Thursday, June 7, 2007

Introduction

Dear everyone,

This is what I have been doing with my life for the last few years.

No, you don't have to read them if you don't want to.

-Phoebe

Science and Environmental Controversy

HPSC1500 essay, S2 2004



Ecosystems are complex, and our knowledge of them is limited, as the biological scientists who study them are the first to admit. Human social systems are complex too, which is why there is so much work for the ever-growing number of social scientists who study them. Environmental problems by definition are found at the intersection of ecosystems and human social systems, so one should expect them to be doubly complex. (Dryzek, 1997; p.8)



Introduction

Understanding the role of science is critical for the meaningful analysis of environmental controversy. Throughout the course of a controversy, science and scientists will function in many different ways, on behalf of many different actors, including themselves. They support or contradict other expert opinions, produce knowledge and solve problems that they may have themselves uncovered, or even created. Different groups utilise scientific knowledge for their own purposes and will debate which science is more appropriate or correct. Debate over the science of an environmental controversy may form a secondary controversy within the first, or mask a deeper clash of values. Even the perception of science by non-scientists will affect the role science plays in environmental controversy. Despite being a kind of authority, however, science is at the mercy of other agents when it comes to communicating its role and knowledge to the wider public.

Science as an authority

Many environmental controversies have their starting point in scientific enquiry. Some issues, such as ozone depletion, cannot be physically observed by ordinary people, while other controversies of risk, such as the transmission of ‘mad cow disease’ (BSE) across species, may not be within the realm of the ordinary person’s experience. Uncovering these issues and bringing them to public attention means that science and the discourse of science are responsible for framing them, and science is often assumed to be central to both the controversy and its possible resolution.

Traditionally, science has been seen as an authority; objective, unified and dependable. According to Barry (1999), part of this authority was due to unity of opinion within science, which was seen as a homogenous establishment. However, it is clear that during the course of an environmental controversy, science as well as public opinion is divided on many issues. Obviously, where scientific opinion and beliefs are divided and enlisted as support by different actors within a controversy, it is impossible for science to be seen as a meaningful authority. Roll-Hansen (1994) suggests that, despite this division, we still assume science is correct, but as the opinions of experts vary and we know eventually some beliefs will be disproved, it is tempting to side with the expert whose stance we like best. Yearley (1988) also asserts that scientific knowledge and its authority is not absolute and should in no way be regarded as such. He argues that, instead of being evaluated in terms of the truth of the science, scientific evidence is often evaluated socially, in terms of who produced it. Part of science’s authority therefore comes from the position in society of the scientists or institutions that produced it.

Furthermore, while it may not be accepted or obvious to many scientists, postmodernist theory recognises scientific knowledge as being just one kind of knowledge within human experience (Barry, 1999). Irwin (1995) also notes that the discourse of science imposes this one kind of knowledge and considers it alone to be valid or authoritative. The credibility of science is evaluated by ordinary people socially, not scientifically, as illustrated by Wynne’s (1989) account of the experience of sheepfarmers in dealing with scientists after the Chernobyl fallout. This example also illustrated the practical consequences of scientists failing to grasp the authority of the farmers in their special area of expertise – sheep. Although one of the goals of science is to describe the natural world, the abstract knowledge science produces requires additional assumptions and non-scientific applications of values to be implemented in the real world. To be useful in the real world, scientific knowledge needs to be combined with other forms of knowledge (Wynne, 1989).

Science as a tool

With the demise of the orthodox view of science as an authority in its own right, analysis of the role of science in environmental controversy has focussed on how science is used by different groups to further their views and support their own arguments. In their discussion of analytical methods, Martin and Richards (1995) describe the group politics approach to controversy analysis as regarding science as a resource that can be mobilised in support of a group’s position. Martin’s (1988) paper on the fluoridation controversy also treats science in this way. Irwin (1995) echoes this sentiment when he describes science as a legitimating tool, both in the stances it supports and the way it frames the controversy. He also describes this use of science as “politics by other means” (p.49).

Dryzek (1997) places importance on the discourses used by groups as a key to analysing environmental controversy. He claims that discourse conditions the way a group will frame and address the controversy. As previously mentioned, most environmental controversies will be framed by the discourse of science. However, different groups with different worldviews will also attempt to use science to further their own discourse; science therefore subscribes to the discourse of its sponsor in the importance it places on elements of a controversy and the solutions or actions it proposes. Deprived of a social or political agenda of its own, or having it subsumed by the group it works for, science is another means of supporting established discourses, such as economic rationalism or sustainability, or possibly establishing new discourses in the public arena.

The use of science as a tool in public debate may also obscure the way science is still vulnerable to marginalisation within environmental controversy. While the government and corporations have access to the public via commercial media channels and their own forms of communication (mailouts, public meetings, etc), science has no such resources. Roll-Hansen’s (1994) description of the media treatment of acid rain damage to forests in Norway highlights the way in which commercial media selects which scientific knowledge to broadcast according to its own interests, rather than giving a complete or accurate account. So although scientists are actors in environmental controversy and their expertise is used by various stakeholders to support their claims, the roles of science as a stakeholder and independent entity within a controversy are far less important than its legitimating role.

Science as the creator of environmental controversy

Ulrich Beck’s concept of the ‘risk society’, as discussed by Barry (1999) and Irwin (1995), includes the idea that science is the underlying cause of environmental controversy because of its role in the development of industrial society. If, however, we accept that science is used to develop technology and further development, perhaps it would be more useful to pass the blame for environmental controversy onto the institutions and governments that support modern industrialism and its social and economic structure. Roll-Hansen (1994) identifies the ineffective use of science by policy-makers as another possible cause of environmental controversy, due to the failure of the relevant authorities to adequately understand the science and its impacts before implementing legislation.

Science as a surrogate controversy

Reid (1995) describes the existence of “meta-problems” (p.15) – interlinked environmental and social concerns that have many causes and dimensions. To treat a meta-problem effectively, all of its aspects must be addressed. For example, the unsustainable development of poor nations is not just a matter of a lack of scientific knowledge or technology, but a product of the global economic climate they operate in. Science could be called in to provide material solutions to the many controversial environmental problems of development, but the real source of controversy would still be the nation’s economic policy driving the unsustainable development, which in turn would be driven by international market forces.

In a similar vein, Irwin (1995) makes mention of Cotgrove’s argument that environmental controversies are actually founded on clashes between different ideas of morality and society. Ordinary people, therefore, will have minimal interest in the specific technical information of a controversy (Irwin cites the nuclear power debate as an example) and any scientific information provided to them will probably not change their position. What they really object to is not so much the technology as the society that produced it and that the new technology in turn reinforces, yet any debate over the new technology will centre on the risks associated with it. Likewise, Nelkin (1995) suggests that controversies are significant because they are also moral statements about the role of science.

Conclusion

As science is varied and complex, in its institutions and expertise, so too is the role science plays in environmental controversies. Science can be an authority in its own right, but more often has its authority and credibility appropriated by conflicting groups within a controversy. The discourse of science shapes how environmental controversies are seen and how they are debated in the public arena, but science itself is dependent on the discourse of its sponsor in how it frames solutions. Indeed, science is usually responsible for bringing environmental controversies to light, as well as providing the technical basis for their resolution. Less obviously, debate over science and risk in environmental controversy may mask the deeper debate about fundamental social and moral values. Appreciation of the complex and varied role science plays in an environmental controversy is crucial for its meaningful analysis, which in turn is essential for finding ways forward to a possible resolution.


References

Barry, J (1999) Environment and Social Theory. Routledge, London. pp. 151-175.

Dryzek, J (1997) The Politics of the Earth: Environmental Discourses. Oxford University Press, Oxford. pp. 3-22.

Irwin, A (1995) Citizen Science: a study of people, expertise and sustainable development. Routledge, London. pp. 40-61.

Martin, B (1988) Analysing the fluoridation controversy: resources and structures. Social Studies of Science. 18:331-363.

Martin, B & Richards, E (1995) Scientific knowledge, controversy and public decision-making IN Sheila Jasanoff et al. (eds) Handbook of Science and Technology Studies. Sage, Thousand Oaks. pp. 505-526.

Nelkin, D (1995) 'Science controversies.' IN Sheila Jasanoff et al. (eds) Handbook of Science and Technology Studies. Sage, Thousand Oaks. pp. 444-456.

Reid, D (1995) Sustainable Development – An Introductory Guide. Earthscan, London. pp. 3-23.

Roll-Hansen, N (1994) Science, politics and the mass media: on biased communication of environmental issues. Science, Technology and Human Values. 19:324-341.

Wynne, B (1989) Sheepfarming after Chernobyl. A case study in communicating scientific information. Environment. 31/2:10-39.

Yearley, S (1988) Science, Technology and Social Change. Unwin Hyman, London. pp. 16-43.

“Hospitals make you sick” - The perceived function of hospitals through history

HPSC2660 essay, S1 2005

The Uses of a Hospital: - By an Hon. Surgeon

FIRST:- To relieve and treat the sick and necessitous poor.

SECOND:- For the training of Nurses.

THIRD:- For the training of Students.

FOURTH:- For the Training and experience of Specialists.

(Sydney Hospital 1918-1919, p.15)

Introduction

Hospitals and hospital-like institutions have had a place in European society for over 2000 years. From expressions of Christian charity or non-denominational philanthropy to means of purchasing spiritual rewards or worldly reputation, from refuges for society's outcasts to gateways to death to places of spiritual, moral or physical healing, the perceived function of hospitals is not always consistent or easy to define. This essay will examine various perceptions of the function of hospitals throughout their history and why certain groups came to hold these views, contrasting historical perceptions of hospitals with modern ones. These perceived functions include charity, imprisonment and religious work. A person's class and relationship to a hospital were often the deciding element in how he or she perceived its role in the community and the functions it performed, especially with larger institutions such as the General Hospital of Paris. The perception that “hospitals make you sick” dates from the medicalisation of hospitals and still exists today, coloured by a host of other perceptions about the medical profession, public institutions, community health and the consumers' “right to know”.

“Hospitals make you sick”

Hospitals have borne the blame for disease and death since at least the end of the 18th century. The military surgeon Dr Robert Hamilton said in 1787 that “among the causes of sickness and death in the Army are the hospitals themselves” (ed. Poynter, 1964, p.161). Late 18th century hospitals were widely seen, especially by those who were likely to be patients, as “gateways to death”. One observer of the Hôtel Dieu in Paris described the plight of the patients in no uncertain terms: “these poor wretches come out with diseases they did not have when they went in, and often pass them on to the people they go back to live with” (Diderot in Sand, 1952, p.86-87). Even the Hôtel Dieu staff cannot have thought much of their patients' chances of survival – the hospital charts before the reform had a space for entering the date of death but nowhere to enter the date of recovery (Richmond, 1961). In light of this it seems no wonder the poor people of the time thought of hospitals as a conspiracy to kill them (McKay et al., 1984). The perception that hospitals are detrimental to patients' health continued through the 19th century. According to Granshaw (1994), 19th century Sanitarians were horrified at the death rates of urban hospitals and saw them as inappropriate places to treat the sick, vastly inferior to country hospitals. Our modern perception of hospitals is coloured by all the scepticism, reservation and insecurity we have about all facets of medicine. The public demands transparency and accountability and no longer accepts anything less than the highest standards of professionalism. In a disturbing parallel with Diderot's time, the short time most patients admitted to hospital today spend in-house means that an infection contracted in the hospital often only becomes obvious after the patient's return home (Ayliffe et al., 1999). Although this does not seem to have affected the public's perception of “preventable” hospital illness and death, which is assumed to occur within hospital walls, the perception is that hospitals should be places of perfect conduct and expertise; where they fall short, the public demands change. This attitude is succinctly summed up in the words of Helen Hopkins of the Consumers' Health Forum of Australia: “If mortalities are occurring we need to have systems to ensure they don't recur” (Brown, 6 June 2005, p.6).

However, not everyone has held such a dim view of hospitals. Over the ages founders, patrons and subscribers of hospitals have all seen them as beneficial institutions, whether for their own personal health or for other benefits. One of the encouragements to support an 18th century voluntary hospital, for those who could afford it, was not just the increased standing it bought in the community, but that it would prevent sick beggars roaming from door to door spreading disease (Cartwright, 1977). Similarly, sick servants could be sent to hospital rather than endangering the health of the family (ed. Poynter, 1964). For the wealthy of the time, then, hospitals were seen to act as a sort of protection from disease. Before the 15th century, hospital founders and their descendants found that a convenient function of the hospital was accommodating their household if they required it when travelling (Cartwright, 1977). Obviously they couldn't have thought the risk of contracting anything was too great, but of course care of the sick was not the only function of hospitals at this time – they also offered lodgings to pilgrims, the homeless and the elderly.

The hospital as a charity

The most widespread and longest-standing perception of hospitals, one that only disappeared when hospitals became the pinnacle of medical technology, was that of a charity. The Christian compassion for the poor and afflicted expressed itself in the form of hospice or hospital, in lazar houses for lepers and in movements like the Hospitallers and later the Catholic Vincentians and Daughters of Charity. Only in recent times has effective healing had a place in hospitals, or been expected to. Today we look back on these institutions as fulfilling the roles of aged-care homes, orphanages, homeless shelters or combinations of all these things (Cartwright, 1977). The function of these institutions was not so much healing as caring, providing compassion and practical assistance in the form of shelter, clothing and meals. Oddly enough, while we think of our modern hospitals as being purely places of serious medical practice, Barnes (1961) mentions that hospitals are sometimes seen by marginalised groups, such as unemployed migrants, as a more practical form of welfare, providing meals and accommodation as well as treatment for illness.

The voluntary hospitals of the 18th century were, as the banker Henry Hoare put it, the product of a revival of “the True Christian Spirit of Justice and Charity” (in Cartwright, 1977, p.36). Although extremely selective when admitting patients, the charity function of these hospitals extended beyond the “deserving poor” person actually admitted into the hospital to their family, who weren't forced to abandon their jobs or strain themselves caring for and feeding the invalid (Risse, 1994). Although these hospitals, founded by the philanthropic and upwardly mobile affluent classes, were primarily for the medical care of the working class, other subscription hospitals had a less medical function. The hospital established by the Royal Navy at Greenwich in 1694 functioned as a pension home for aged or disabled seamen, and sailors paid a monthly contribution in return for the privilege of retiring there (ed. Poynter, 1964).

Another product of the philanthropic revival was the establishment of foundling hospitals. Charles West, the founder of such a hospital, envisaged it as a place where poor sick children were cared for and where mothers whose children were unable to be admitted received instruction on how to care for them (ed. Poynter, 1964). Unfortunately, despite the charitable intentions of the founders and supporters, the reality of these institutions was that they became a place to dump unwanted children. The hospitals weren't just used by the mothers of the poor working class who couldn't afford to feed another child, but also by overburdened parish officials who would send their foundlings to the hospital rather than care for them within the parish (McKay et al., 1984).

The General Hospital of Paris was the expression of the authorities' interest in the welfare of the homeless and poor who relied on alms for survival. To the authorities and those members of the public unlikely to ever find themselves inside the hospital it was seen as an excellent, efficient means of dispensing charity to more poor people at a lower cost. The function of the hospital was to allow the poor to “learn to live a life of dignity” (Geremek, 1994, p.222). The hospital was also a privilege, as only residents of Paris were eligible to receive its charity. The program of “treatment” that this hospital offered was religious instruction and work – every resident capable of working did so, or faced expulsion. This enforced labour aspect did not, in the eyes of the better-off public, contradict the charity function of the institution, because in no way did the labour of the “imprisoned poor” bring the hospital any profit; it was purely for the dignity it bestowed, the morals it instilled and the respect it gave them for honest work (Geremek, 1994). But not everyone felt the hospitals were such benevolent institutions, especially those who found themselves inside, or likely to be inside them. The next section will deal with these negative perceptions of the hospital as a place of confinement, isolation and punishment – the hospital as a prison.

Hospitals as places of imprisonment

Hospitals have a long history as places for keeping people seen as a public nuisance. In the middle ages, hospitals were sometimes used by the upper classes in growing trading towns as places to keep beggars overnight and ensure the peace was kept (Granshaw, 1994). Even recently, the idea of hospitals being a place to “put away” certain groups has surfaced in the public consciousness. During the inter-war period in Britain, institutions for the sub-normal, formerly known as “colonies”, started to be called “hospitals” and were established in more remote areas than most mental institutions (ed. Poynter, 1964). Although the mental hospitals established through the 18th and 19th centuries were not generally located in rural or remote areas, their function was still, from the point of view of the public and the patients, the removal from society and imprisonment of dangerous lunatics. It was not until after 1800, when the medical profession started to take an interest in mental disease, that the isolation aspect was thought to be a useful part of treatment (Porter, 1994).

The immediate predecessor of the General Hospital of Paris was the “Hospital for the Imprisoned Poor”, created in 1611 (Geremek, 1994, p.221). Giving alms and begging were both made criminal offences, the former punished by a fine and the latter by being sent to the Hospital. The General Hospital was created in 1656 and absorbed many existing foundations under its auspices (Geremek, 1994, p.223). In a sense it was the first in a series of punishments – expulsion from the hospital or repeated convictions for begging landed vagrants in prisons like other criminals. The lower classes of Paris saw the hospitals as part of the general repression of the poor by the authorities. As part of the program of enforced labour, inmates of the hospital could be hired by outside ventures such as builders. In return for their labour the inmates would receive a fraction of their pay, the rest going to the hospital (Geremek, 1994, p.222, 225). This antagonised the working class – on top of the high unemployment rate, the hospitals were seen to make workers' positions ever more precarious through the unfair competition they offered for jobs. The activities of the hospital made it ever more likely that workers would end up there themselves (Geremek, 1994, p.226-227). By the 18th century, however, the higher classes too were starting to see the hospital less as a benevolent charitable institution and demanded humanitarian reform. A new institution called a dépôt de mendicité had been created in other parts of France for the imprisonment of professional beggars, and the General Hospital was accused of being no better than these establishments (Geremek, 1994, p.228). The hospital still continued to function as a prison, however, for those convicted of begging, while for the unemployed it functioned as a sort of employment office where work was found for them on public projects. Eventually the prison function of the General Hospital was taken over by actual detention centres (Geremek, 1994, p.228).

Hospitals as places of religion

Although, as time went on, hospitals came to resemble churches less and less and municipal authorities took over the administration of many institutions, hospitals continued to perform some of the functions of churches and to be established and run by religious groups until relatively recently. Even though it only admitted the sick, the Hôtel Dieu's main function was religious. The Catholic church saw the saving of souls, repentance and forgiveness of sins as the primary function of the hospitals it established and ran. For the Hospitallers and hospitals established according to their code, a sick person could be seen as Christ and therefore the hospital as a monastery (Risse, 1999). The Hôtel Dieu's administration kept records of conversions, but non-Catholic babies didn't appear on either the birth or death registers of the 18th century (Richmond, 1961). Joerger (1980) explains that hospitals were a significant part of the Catholic reconquest of France; because they were the ultimate form of charity and the most obvious demonstration of the Catholic belief in salvation through deeds, a much higher ratio of hospitals to population was usually found in Protestant regions. From this we can reasonably extrapolate that, for the Catholic church, one of the main functions of a hospital was to inspire an awe in the public that would create conversions and save souls, as well as attend to the pastoral needs of the patients within the hospital.

One of the significant groups of hospital founders in the newly-explored American west during the 19th century was religious orders, mostly Catholic nursing nuns. These nursing sisters established hospitals for religious reasons and their hospitals performed religious functions – they brought the sisters close to the working migrant population, who were predominantly Catholic, and put them in a position to impress and possibly convert Protestants (Nelson, 2001). Especially in such a harsh environment as the frontier, hospitals were places to carry out good works of the spirit as well as the body. Missionary work could be a deciding factor as to whether a nursing order would take up a hospital contract (Nelson, 2001, p.104). On the east coast, Catholic nursing orders were often the only nurses brave enough to help during epidemics. As well as being moved to help the suffering, another strong religious motivation for them was to return souls to God – many of the worst affected were migrant Irish living in the poorest slums (Nelson, 2001, p.40). For the nuns, physical care sprang from a spiritual call that permeated all their hospital work. For a patient to die without grace was a failure on their part (Nelson, 2001). The Daughters of Charity, the order established by Vincent de Paul and Louise de Marillac, saw themselves as working with God every day because the poor and needy were representatives of Christ (Daniel-Rops, 1961).

Conclusion

As we have seen, the perceived functions of hospitals have varied enormously throughout history, depending on whether a person was a patron or a patient, whether they were likely ever to end up in hospital, and how they perceived the people who did. The lofty ideals of hospital founders, from the private philanthropists who funded 18th century voluntary hospitals to the authorities and upper class of Paris at the time of the Hospital for the Imprisoned Poor, were often interpreted very differently by the people who were the beneficiaries of their works. From churches to charities, places of conversion to prisons, bad for one's health or good for it, only recently has the hospital become widely accepted as the place of medical technology and expertise we know today. Despite this fundamental shift in perception of the ideal role of the hospital, the perception that it can still be bad for your health persists, as it has done for at least the last 300 years.


WORKS CONSULTED

Ayliffe G.A.J., Babb, J.R., Taylor L.J. (1999) Hospital-acquired Infection. 3rd edn. Butterworth Heinemann, Oxford.

Barnes, E. (1961) People in Hospital. Macmillan, London.

Brown, K. (2005) 'Call to check surgical deaths', The Australian, 6 June 2005, p.6.

Cartwright, F.F. (1977) A Social History of Medicine. Longman, London.

Daniel-Rops, H. (1961) Monsieur Vincent. Hawthorn Books, New York.

Geremek, B. [trans. A. Kolakowska] (1994) Poverty: A History. Blackwell, Oxford.

Granshaw, L. (1994) 'The rise of the modern hospital in Britain'. In: Medicine in Society, ed. A. Wear. Cambridge University Press, Cambridge. p197-218.

Joerger, M. (1980) 'The Structure of the Hospital System in France in the Ancien Régime'. In: Medicine and Society in France ed. R. Forster and O. Ranum [trans. E. Forster and P. Ranum]. Johns Hopkins University Press, Baltimore. p104-136.

McKay, J.P., Hill, B.D., Buckler, J. (1984) A History of World Societies. Houghton Mifflin Company, Boston.

Nelson, S. (2001) Say Little, Do Much. University of Pennsylvania Press, Philadelphia.

Porter, R. (1994) 'The Patient in England, c. 1660-1800'. In: Medicine in Society, ed. A. Wear. Cambridge University Press, Cambridge. p91-118.

Poynter, F.N.L. ed. (1964) The Evolution of Hospitals in Britain. Pitman Medical Publishing, London.

Richmond, P.A. (1961) 'The Hôtel-Dieu of Paris on the Eve of the Revolution'. Journal of the History of Medicine and Allied Sciences. 16: 335-353.

Risse, G. (1994) 'Medicine in the age of Enlightenment'. In: Medicine in Society, ed. A. Wear. Cambridge University Press, Cambridge. p149-195.

Risse, G. (1999) Mending Bodies, Saving Souls. Oxford University Press, Oxford.

Rosen, G. (1976) A History of Public Health. MD Publications, New York.

Sand, R. (1952) The Advance to Social Medicine. Staples Press, London.

Sydney Hospital (1918-1919) An Appeal by the Sydney Hospital for a Peace Offering of £100,000.

The role of organisations in organisational birth – a brief examination of organisational theory

ARTS2000 essay, Summer Session 2005-6

While aspects of organisations such as structure and growth have been widely examined and discussed by theorists, the process and circumstances of organisational birth have received very little attention and analysis. Even significant authors of organisational theory seemingly dismiss organisational birth as an event of minor significance, which, if included at all, is often illustrated by a neat 'just-so' story of far less than universal application or significance. This essay examines theories of organisational birth and contrasts these theories with a real world example. I intend to show that organisational theory is deficient in both description and explanation of organisational birth, with a specific focus on theoretical description of the role of established organisations in the birth of new organisations.

'Organisational birth' will be defined here as the formation of a new, discrete organisational entity, in particular a formal organisation. I use the term 'formal organisation' in the sense outlined by Blau and Scott (1966), to describe an organisation with an explicit structure and goals that have been predefined in anticipation of the organisation's activities. Organisations can be formed in a variety of ways, depending on the kind of organisation to be established, the reasons for doing so and the circumstances under which it comes to be. The establishment of a political party will be different to the establishment of a for-profit company, which will be different again to the establishment of a charity. Differences include the legal framework each is subject to for official recognition of organisational status, as well as the provisions they must submit to in the course of operation. The number and kind of individuals or even other organisations involved in planning and establishment, who may contribute resources and knowledge, and the political, cultural and economic conditions at the time of establishment are also potential factors.

I wish to contrast theories of organisational birth found in standard texts with a real-life example, drawn from an Australian report of an enquiry into a specific independent, not-for-profit administrative sector (The Simpson Report, 1995). This report took submissions from a number of organisations on a variety of topics, but the part of particular interest is the discussion and submissions that resulted in the report recommending the formation of a new organisation in this particular sector. Almost all organisations in the sector are non-government, not-for-profit and limited by guarantee, working for the benefit of their members; the types of organisations that one would think of as being formed by individuals of a particular group for their own mutual benefit. Yet the key actors, who were instrumental in proposing and then realising the birth of a new, independent organisation, were member organisations of the same sector, advocacy groups and the government. Does existing organisational theory accommodate this account of organisational birth?

Existing theories of organisational birth are generally brief, inconsistent from author to author and not very useful, especially when applied to the real world and compared to specific examples of organisational birth. The theoretical description of organisational birth varies markedly between authors largely, I suspect, as a result of their academic background. Blau and Scott (1966) and Scott (2003), for instance, come from a sociological perspective, while it is obvious that Jones (1995) writes from a business or economic point of view. These differences in background, combined with the relative lack of attention these authors (especially the first and the last) give to organisational birth, might go some way in explaining the indifferent state of knowledge in this area.

Blau and Scott (1966) are representative of authors from a sociological background writing during the 60s and 70s. Their writing concerns itself with formal organisations, how these organisations interact with society and vice versa. Organisational birth is not discussed in any great depth, but rather is illustrated by an example in the introduction to the book in spectacularly simplistic fashion. A group of people come together to accomplish some task, but must first define the tasks of each individual and their relationship to the other members of the group. Such an organisation, once established, might be distinguished by the independence of the organisation from the individuals that work within it (Blau and Scott, 1966). Nowhere else in the book do the authors mention the creation of an organisation again. This lack of interest in organisational birth is repeated in the works of other authors from a similar period, such as Dunkerley (1972) and Sofer (1972). Other aspects of organisations are discussed in great depth and have many studies to draw on, but far less significance seems to be placed on how these organisations came to exist in the first place.

Jones (1995) is broadly representative of one tradition in organisational theory that is mostly concerned with business, especially for-profit business. Organisations are assumed to operate in the economic marketplace, be they businesses, charities or even the nation's defence force (Jones, 1995). This view sees organisations as tools, structures for creating “value”, and also as the products of human invention.

The important thing to remember, however, is that organizations are human creations: They reflect our hopes, desires, motivations, and vision of ourselves and the world. The way they operate and the results of their behaviour are the products of the way we govern them and of the social, institutional, and political structure within which they operate. (Jones 1995, p.6)

For this value-driven model, organisational birth occurs when individuals (entrepreneurs) come together to exploit a new opportunity to create value. Interestingly, Jones does mention other organisations in connection with the birth of new organisations – he believes that existing organisations are a source of new entrepreneurs, who will leave their employers to set up their own organisation. Also, by establishing a new organisation to the pattern of an accepted organisational form, entrepreneurs confer legitimacy on their new organisations (Jones, 1995). Despite these useful observations, Jones cites no evidence from studies of organisational birth, but does give a little 'just-so' vignette in his section on organisational birth with such meaningless examples as, “Michael Dell found a new way to market low-priced computers to customers,” (Jones, 1995: p.421).

In light of the real-world case we can immediately see that some of Jones' fundamental assumptions about the creation of new organisations seem a little tenuous. The case described by the Simpson Report (1995) seems to show a group of related organisations, not individuals, contributing to the establishment of a new organisation, and from the report itself it does not appear that any employees of the existing organisations left their posts to work permanently in the new one. But that knowledge was transferred between the existing organisations, and that they were pro-actively involved in this process, does seem to support the idea of existing organisations being important as a source of knowledge and precedent for new organisations. Also, the structure of the new organisation would have been formalised very early on, if not at the beginning, which would contradict another of Jones' assertions, that young organisations have a high degree of structural flexibility and become more formalised as they get older (Jones, 1995).

Scott's (2003) text is more comprehensive than the two previously examined, as he attempts to give an overview of the field of organisational theory in general. Scott takes the multitude of disparate writings and fundamental assumptions and divides them into three broad fields of system theories: rational, natural and open. Scott's background is more academic, which might explain the greater breadth of theory in his text as compared to Jones, or even his earlier work with Blau (1966). The most important points to note from this text are Scott's own assertion that organisational birth has been largely ignored by theorists and that organisations are increasingly playing roles in the birth of new organisations (Scott, 2003: p.169-170). Although he concedes that a rational, formal and goal-orientated definition of organisations is the most dominant in the field, Scott also declares that “goals are not the key to understanding the nature and functioning of organizations... we will miss the essence of the organization if we insist on focusing on any single feature in isolation,” (Scott, 2003: p.24). Similar to Jones, Scott says that most organisational structures are borrowed rather than invented. Scott also gives two possible reasons for organising: the rationalist view that organisations are created to co-ordinate complex tasks of an administrative or technical nature; and the natural systems view that organisations are demonstrations of formal rationality, created out of approved structures to enact known and approved procedures (Scott, 2003).

While the absence of simplistic or case examples in Scott's work is refreshing, the specific theoretical content of his section on organisational birth still fails to investigate the significance and possible consequences of organisational involvement in organisational birth. The natural systems observation about formal rationality is of potential significance to this area, as are other observations about organisational behaviour in general, but none have as yet been linked to the actions of organisations involved in the creation of new organisations. Scott makes an important start by at least mentioning that organisations are increasingly involved (Scott, 2003), but he fails to contribute anything significant or new to the area of organisational birth.

The involvement of organisations in the formation of other organisations is not unknown, and indeed there are some obvious examples. Industries come together to establish lobby groups, or even organisations for the creation of industry-wide standards, such as the Personal Computer Memory Card International Association (Stair & Reynolds, 2006). Universities spawn research and development companies and technology transfer companies, such as Uniquest, the technology transfer company for the University of Queensland (Uniquest, 2006). The idea of organisations creating other organisations does not receive much attention in any of the writings discussed above, and for no apparent good reason. The economic view seems to hold the establishment of organisations by individuals as self-evident. Entrepreneurs are cast as kinds of geniuses, spotting opportunity to which all others have been blind. The sociological view also sees the establishment of organisations by individuals as the norm, although certain authors, such as Scott (2003) have recognised that organisations can play a part in organisational birth.

Contributing to organisational birth is an overlooked area of organisational behaviour in general, although it remains to be seen whether some existing observations made in the literature on organisational theory could be usefully extended to this subject. I believe that it is a significant issue, especially where organisations are founded to deal with social problems or for charitable causes. The natural systems view given in Scott (2003), that formal rationality ('doing it right') tends to outweigh functional rationality ('getting things done'), could be usefully combined with observations made by Colebatch and Larmour (1993), that organisations tend to recognise and frame problems according to their own organisational perspective, structure and goals. The practical implications of these observations could be that implicit in the structure and function of new organisations, founded by existing organisations, is an attitude of co-operation or even co-dependency between the new and the old. It is widely accepted that organisations seek to stabilise the environment they operate in, and this clearly extends to the population of organisations each one inhabits and deals with. Any of these theoretical observations could provide a starting point for investigating the events dealt with in the Simpson Report, but using current organisational theory, there is no precedent or acceptable way to draw meaning out of the real-world observations of the actions of these organisations and their involvement in an organisational birth.

To conclude: the involvement of organisations in the birth of new organisations is a very poorly recognised phenomenon in organisational theory, and its implications more poorly recognised still. More study will need to be done in the areas of organisational birth and organisational behaviour, and also to link the two, before real-world examples, such as that described in the Simpson Report (1995), can be usefully understood.


References

Blau, P & Scott, W R (1966) Formal Organizations. Routledge and Kegan Paul, London.

Colebatch, H & Larmour, P (1993) Market, Bureaucracy and Community. Pluto Press, London.

Dunkerley, D (1972) The Study of Organizations. Routledge and Kegan Paul, London.

Jones, G (1995) Organizational Theory: Texts and Cases. Addison-Wesley, Reading.

Scott, W R (2003) Organizations: Rational, Natural and Open Systems. 5th edn. Prentice Hall, Upper Saddle River.

Sofer, C (1972) Organizations in Theory and Practice. Heinemann Educational Books, London.

Stair, R & Reynolds, G (2006) Fundamentals of Information Systems. 3rd edn. Thompson Course Technology, Boston.

The Simpson Report: Review of Australian Copyright Collection Societies (1995). Department of Communications, Information Technology and the Arts, Canberra. ONLINE: http://www.dcita.gov.au/ip/publications/ip_publications/the_simpson_report_review_of_australian_copyright_collecting_societies [accessed 2/3/06]

Uniquest (2006) http://www.uniquest.com.au [accessed 26/3/06].

What then must we do?

HPSC2500 essay, S1 2006

While forming a consensus about what kind of action is sustainable is often impossible, it can be easy to get people to agree about activities that are unsustainable, that consume resources that we cannot replace. Opinion can then be further divided between those who believe that when irreplaceable resources run out, we will find a way to substitute or work around that problem with technology or ingenuity, and those who believe that this is not a viable way to manage non-renewable resource depletion. There is much controversy over the “environmental crisis”. Whose fault is it? Who is responsible for fixing it? What are its exact parameters? Is there only one crisis, or are there many different crises, from smaller, local or regional problems, like deforestation or salinity, to larger, global problems like ozone depletion or greenhouse emissions?

Some commentators might frame the problem as a matter of the ethical framework through which we look at the world, or even as a problem of the intellectual framework we use, the way we understand the relationship between language and the world (cf. Szerszynski, 1996). Any given problem can be stated in as many different ways as any given solution[1]. I do not want to go into specifics, but would rather address the problem pragmatically: given that we humans as a species seem to be living unsustainably, what can we do to change this?

My position is based on a number of assumptions, the key assumption, in direct contradiction to Szerszynski (1996), being that many people already derive meaning from the world and already hold beliefs that inform their ideas of what to do. Our beliefs in themselves do not cause problems of sustainability, but our behaviours do. I also assume that the people who live the least sustainably are satisfied with the way they live and will be resistant to change. For this reason I think that change must be positive, focusing on the things that people can do rather than the things they can't. Furthermore, I think that the focus for change should be on individuals, ordinary people. While we can learn to think from an ecocentric point of view, ultimately we can only ever act as people. I believe people have the right to think and believe whatever they like, but that we each have a responsibility to act in the best way. “Best” will mean many things to many people, but I think that a part of any given person's “best” behaviour should be that their actions are sustainable, and indeed that this notion of “best” can find a place in people's existing beliefs, regardless of who they are or where they live.

In fostering change I want to focus on three general strategies, aimed at those of us in the developed world who live the least sustainably. The first is creating awareness – helping people to realise the extent of their influence, just how large their ecological footprint is and how widely it ranges over the world. The second is to foster responsibility, getting people to contribute to resource and technology issues that affect them and their community, such as water supply, transport and electricity and to accept the consequences of those decisions. The third, least specific but perhaps most important, strategy is to create broad cultural change, for example by examining prestige and desirability and finding ways to make people's aspirations sustainable. We may also need to examine our expectations of rights and responsibilities for sustainability.

“Rights” are a contentious notion, so I will rephrase this paragraph as a quick review of behaviours that are widely agreed to be either necessary or acceptable. These would include meeting our individual or collective basic needs, such as water, shelter, nutrition and education. We could also add to this list good health, freedom of self-expression and choice about your individual lifestyle, the work you do and the government that has jurisdiction over you. It is also generally seen as acceptable to have whatever you can pay for, especially when referring to material goods. But how far does money go toward absolving us of responsibility? I don't think there is anything fundamentally wrong with saying that we can have whatever we can pay for, but perhaps we need to expand our ideas of how we can “pay” for the things we want. As well as being responsible for handing over money for products or services, we need to include responsibility for accepting the physical realities and consequences of creating those things.

Awareness

To a certain extent, people are already responsible for accepting the way products are created. We are given the responsibility for using our “sovereignty” as consumers to supposedly influence everything, from labour conditions in the factories where our clothes are made to the meat content of a meat pie. However, the overwhelming influence we consumers have had is to make things cheap and available. Labelling is meant to help us make informed decisions, so that we can buy the thing we believe is right or best, but when it gets down to a purchase decision, most often the only label that is useful to us is the price tag. This is not to say that consumers don't care about the production of the things they buy, but there is no one set of facts that all people will base their decisions on. Even the company responsible for putting information on a label may not know how the components of their product were made.

With the internet making so much information readily available, we might expect the average person to be more informed about their activities and the impact they have on the world. But this information is only available to us when we're surfing the net, as opposed to when we're switching on the heater or eating out at a fast food restaurant. Given that people can only make choices based on the information they have at hand, how can we make people more aware of what their choices mean? Labels can give you information on the country the product was made in, but what about the origin of the components or even the raw materials those components were made from? The life-cycle of the products we consume, from televisions to food, is obscured to the consumers who are given the responsibility for choosing not just the products but the production practices that made them.

Building on the idea of ecological footprint calculators, I can imagine a small portable device that estimates a person's ecological footprint based on lifestyle and purchasing information that a person inputs at the device's request. The concept would be similar in design to a tamagotchi pet[2], in that the interaction could be initiated by the user whenever they felt like it, or by the device, which could attract the user's attention at various points during the day to offer advice and information about whatever they were doing at the time. Similar to labelling, which is meant to have an effect at the point of purchase, the device could have an effect at the point of choice for behaviour, with a potentially profound effect on a person's lifestyle.
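The core logic of such a device could be very simple. The sketch below is a minimal illustration of that idea; the activity categories and per-unit footprint factors are invented placeholders, not real footprint data, and a real calculator would use published conversion factors.

```python
# Hypothetical sketch of the footprint-estimating logic such a device
# might use. All factors below are illustrative, not real data.

FOOTPRINT_FACTORS = {
    "car_km": 0.002,           # hectares per km driven (hypothetical)
    "electricity_kwh": 0.001,  # hectares per kWh used (hypothetical)
    "meat_meals": 0.01,        # hectares per meat-based meal (hypothetical)
}

def estimate_footprint(activity_log):
    """Sum a rough footprint estimate from a log of {activity: amount}."""
    return sum(FOOTPRINT_FACTORS.get(activity, 0.0) * amount
               for activity, amount in activity_log.items())

def advice(activity_log):
    """Flag the largest contributor, as the device might do at the
    'point of choice' for behaviour."""
    contributions = {a: FOOTPRINT_FACTORS.get(a, 0.0) * n
                     for a, n in activity_log.items()}
    biggest = max(contributions, key=contributions.get)
    return f"Your largest footprint contributor this week is '{biggest}'."

week = {"car_km": 150, "electricity_kwh": 40, "meat_meals": 5}
print(round(estimate_footprint(week), 3))
print(advice(week))
```

The interesting design questions, of course, are not in the arithmetic but in when and how the device interrupts the user with its advice.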

I think consumer sovereignty has the potential to be a much more powerful force for change than at present, but that in order for people to use their influence they need much more information than they presently have access to, and need to have access to that information at the time they make their choices. We need to come up with creative, timely ways of giving people information they can use, preferably in such a way that reinforces their sense of autonomy and responsibility rather than dictates how they should live their lives.

Responsibility

The aim of awareness is to allow people to be more responsible with their choices and actions. The next step is giving people more responsibility for their choices and actions, by giving them a bigger role in making decisions and contributing to the systems they use and are affected by. This will also hopefully help overcome the limitations of those systems (cf. Evans et al., 1999). Another form of responsibility I would like to see fostered is for users to accept the consequences of their consumption. The non-monetary costs of developments would need to be acknowledged and accepted by the consumers of the utility – part of the price of electricity, for example, could be that you end up living near a wind farm. But people would also need to have a say in the kind of facility or development that was proposed for their area, and have the opportunity to suggest alternatives that might eliminate the need for development altogether.

Ideally, people taking responsibility for the externalities of their consumption could even prompt a move away from centralised utilities, especially if a very direct approach was taken to supply, by choosing to locate sites where demand is the greatest. By creating awareness of the need for and the nature of development, people may even be able to suggest solutions that are more appropriate for their area or community. The more involved people are in the systems that they use, the more ownership and pride they can feel in them and about the responsibility they have, to accept what negatives there are and to encourage those who build and maintain the system to do so in a way that is socially and environmentally responsible.

Cultural change

Moving away from individual values and beliefs, what can we say about the broad cultural values we hold as a nation? Owning a house or car is something many people aspire to – indeed, in Sydney people who already own a house might aspire to owning an investment property as well. We seem to prefer to own personal goods like televisions or computers, even though we know that we'll probably throw out the one we have in a few years when it breaks, or when we want to upgrade to a better one. Making people aware of unsustainable production practices and patterns of consumption and responsible for what they consume is a start, but what alternatives do we have for the way we consume goods in the present economic system?

Changing our cultural values, our beliefs about what is “best” or desirable to do at a group level, rather than as individuals, will be complicated but necessary for sustainability. As I said earlier, I think it is important to assume that people like the way they live at present, and that this applies equally to the people who live the least sustainably. For this reason it is vital to be aware of the cultural momentum that drives peoples' aspirations and behaviours and to find ways to work with it, rather than against it. It is also vital to recognise that no one single strategy will work for all people, and that for any given strategy there will be people for whom it works and can effect a positive change in their behaviours, and people for whom it will likely have the opposite effect. So appealing to people to use less electricity to save themselves money might work for those who are keen to cut their costs of living, but push the message too hard to the wrong audience and being wasteful with energy can become a sign of wealth and luxury.

Take progress, for instance. It is very appealing to be able to say, as a nation, “We have the tallest building in the world,” or, “Our scientists and engineers are the best in the world and we have a proud history of innovation and invention.” Progress has a lot of momentum as a cultural ideal, so we should learn to work with it and manage it, rather than abandoning it altogether. In this way, sustainability can become a point of national pride. Danes, for instance[3], can be proud of their wind generators and see them as the cutting edge of sustainable electricity generation, stuck all over the country like monuments to the public's belief in alternative energy. What if progress did become identified with sustainability (or vice versa) rather than economic growth, or the biggest, most complicated technology? I would argue that for many people who are passionate about the environment, it already is.

The link between belief and behaviour is, I feel, a bit of a chicken-and-egg problem – which comes first, the thought or the act? Although I agree with Szerszynski (1996) that the way to resolve the modern problematic cannot come through the understandings of environmentalism or ecology that we have now, I disagree entirely with the suggestion that the way out of the mess we've made is to simply rethink the relationship between language and the world. The beliefs that people hold already inform them as to what they should do. Instead of solving the academic problems of modernity, problems with little scope for a solution that ordinary people can participate in, we need to help people overcome the barriers that prevent them acting sustainably and in harmony with their beliefs, by giving them information they can use and letting them act upon it.

References

Evans, R, Guy, S & Marvin, S (1999) 'Making a difference: sociology of scientific knowledge and urban energy politics.' Science, Technology and Human Values, 24:105-135.

Szerszynski, B (1996) 'On Knowing What to Do: Environmentalism and the Modern Problematic.' In: S Lash, B Szerszynski and B Wynne (eds), Risk, Environment and Modernity: towards a new ecology. Sage, London. pp.104-138.



[1] For a quick example, we could take the case of a polluted river. The problem could be variously stated as a lack of water treatment, or poor management of a factory on the river, or bad agricultural practice, or lack of respect for the river's intrinsic value. A solution, for example fining the party responsible for the pollution, could be variously seen as revenue raising by the government, a demonstration of a clearly inadequate reactive system of pollution control that needs to be replaced with a preventative system or an effective deterrent that will reduce the likelihood of pollution in the future by giving businesses on the river a wake-up call to improve their effluent control systems.

[2] A little electronic toy, a “virtual pet”, which first appeared in Japan in the 1990s and was briefly but wildly popular in Australia. It requires maintenance to be kept happy and healthy and would alert you with a noise if it was “hungry” or wanted to play and so on. The unique feature of this toy was that you had to keep on taking care of it and interacting with it, or it would “die”.

[3] The following is based on my experience living in Denmark in 2002.

Does the Car have a Future?

HPSC3300 seminar report, S1 2006

How you envisage the 'car crisis' depends greatly on who you are, where you live and how you use your car. While many urban car commuters see congestion and air pollution as cause for concern, urban planners might take a more sinister view of cars as a technology that segregates the rich from the poor, people from jobs and communities from one another. Potential solutions take many forms, from the individual level, such as fitting a catalytic converter to every car, to the community level, like designing New Urban communities based on accessibility rather than automobility. But while we may see the effects of the car crisis as obvious, the underlying causes are not as easy to address. To come up with innovative, workable solutions we need to take into account not only the problems with the car itself, but problems with the system it is embedded in.

So how did we get here? Various historians and urban planners point out that the push for suburbanisation (often seen as part of the reason for the car crisis) was not created by the car, nor is the car a prerequisite for suburban development (Farrelly, 2006; Hovenden, 1983; Steadman, 1999). In Sydney, for example, early suburban expansion was helped along by bus routes provided by the developers, which connected the new housing estates with the city centre (Hovenden, 1983). Competition between such privately-operated buses and government-run trams spurred government regulation of mass transit services and eventually the boundary between state government and private transport was defined – in Sydney before World War II this resulted in a state government monopoly on mass transit and government regulation of private (car) transport (Hovenden, 1983). The trend toward people accommodating the car, as opposed to the car accommodating people, was obvious when attempts to educate the public on road safety in the 1930s were not also matched with attempts to make the roads safer for the public (Hovenden, 1983).

But even if suburbanisation wasn't the original cause of the car crisis in the city, it is certainly a major contributor today. Steadman (1999) points out that where large amounts of inexpensive land surround a city (for example in the US and Australia), car-based suburbs rapidly colonise that land, creating vast tracts of urban sprawl. This sprawl, by virtue of its low population density, makes public transport useless and means that any travel, even to pop out for lunch from a suburban workplace, requires a car journey (Steadman, 1999). From the 1980s, office developments followed residents out into the suburbs, creating cities within cities. These 'edge-cities' are sometimes even larger than the original CBD, but with the crucial difference that they are not well, or even at all, served by public transport, meaning that traffic congestion is a city-wide problem, not just a quirk of the CBD (Steadman, 1999).

Attempts to curb the dangerous side-effects of cars, such as air pollution, do work to some extent, but seem ultimately doomed to failure. Steadman's (1999) figures show that all the gains made in reducing exhaust emissions are eventually offset, or going to be offset, by the increase in car usage. Focusing on the activities of individual cars, then, will at best only ever be a war of attrition. Addressing the problems of pollution and traffic congestion can only be successful by reducing the amount of car usage – this means using cars to transport more people, less frequently. But does it mean getting rid of the car altogether?

The combination of low population densities and large areas of land being devoted to single uses prompts some people to look to urban planning for ways to end the car crisis and car dependency. This reaction against suburbanisation is expressed in many ways, such as “accessibility planning” or “New Urbanism” (Cervero, 1997), the “compact city” (Steadman, 1999), or simply “urbanism” (Farrelly, 2006). The central idea that these movements (if they aren't all just different names for the same movement) have in common is that neotraditional urban design, incorporating high population densities and fine-grained, mixed-use zoning, will result in vibrant communities whose residents can get around wherever they need to go without having to drive a car. “It is people and places that matter, not transportation,” says Cervero (1997, p. 9). Steadman (1999) suggests that nostalgia may also play a part in this kind of urban redesign. The village of Poundbury, in Dorset, UK, is an excellent example of this kind of confluence of ideas, where modern urban planning, based around the pedestrian as the unit of transportation, is executed with architecture in the style of the area before cars moved in (Mitchell, 2006). Winding, narrow roads make it difficult to drive, while for the residents, the small block size and structure of their houses make it nearly impossible to house a large vehicle.

The implication of urbanism seems to be that changing the built environment of human space, making dwellings smaller and closer together and mixing shops and offices in with them, will ultimately result in positive changes to the transport system. Reconfigure the environment, and the behaviours will change accordingly. But Cervero's accessibility planning seems like a very complicated way to go about effecting change. Instead of changing the way people live, simply changing the road that passes through their community can have a dramatic effect on their behaviour and prompt changes to the human built environment they live in. In the case of West Palm Beach, Florida, USA, main roads were redesigned to make them less friendly to traffic speed and volume (McNichol, 2004). Changes in the road led to changes in the human dynamic: increased pedestrian activity, decreased car use and improved land values, attracting new development in the form of shops and apartments. “From the beginning,” writes McNichol, “a central premise guiding American road design was that driving and walking were utterly incompatible modes of transport, and that the two should be segregated as much as possible,” (2004, p.110). Accessibility planning as Cervero describes it (1997, p. 10, table 1) does not seem to transcend this assumption that roads cannot be shared by motorised and non-motorised transport*.

The community-based transport initiatives of urbanism only address the movements of people in the course of their ordinary activities, like commuting to and from work and shopping. Here in Sydney, as well as peak-hour congestion there is also the weekend traffic jam to get in and out of the city. The leisure functions of the car will be difficult if not impossible to replace with community or public transport. This huge cultural and historical impact of automobility, the expectation of a holiday as a journey and of leisure time spent away from both home and work, will not be easily undone. The desire to “see the country”, especially in Australia, still persists. Not so long ago it was almost a rite of passage to travel the entire country, or at least the mainland, by car. Steadman (1999) points out that long distance car travel is increasingly being replaced by air travel, and also that air travel is typically less fuel efficient than car travel (depending on how many passengers the car carries). Perhaps for this reason, then, completely eliminating the car would be counter-productive. The new vision of the future might be a reversal of the situation today – large cities emptied of cars, while the roads connecting them are filled with cars and congestion.

Whatever comes of transport initiatives, petrol prices or emissions controls, the cultural significance of the car as a symbol and the meanings it holds for people cannot be erased quickly or completely. But the future will be different, and if the past is anything to go by, the future of the car will depend on the meanings we attribute to it and which of those meanings survives longest, whether it be the status symbol, the expression of personal freedom or the device used in large groups to prevent travel.

References

Cervero, R (1997) 'Paradigm Shift: from automobility to accessibility' Urban Futures Journal, 22. pp. 9-20.

Farrelly, E (2006) 'More reason than ever to fight for human cities' Sydney Morning Herald, 17 May 2006. p. 15.

Hovenden, L (1983) 'The Impact of the Motor Vehicle, 1900-39' In: Gary Wotherspoon (ed.), Sydney's Transport: Studies in Urban History. Hale and Iremonger, Sydney. pp. 139-154.

McNichol, T (2004) 'Roads gone wild' Wired, 12(12). pp. 108-110.

Mitchell, S (2006) 'Prince Charles – not your typical radical' National Geographic, May 2006. pp. 96-115.

Steadman, P (1999) 'The “Car Crisis” in the Late Twentieth-century City' In: Gerrylyn K Roberts & Philip Steadman, American Cities and Technology: Wilderness to Wired City. The Open University, Milton Keynes. pp. 201-233.



* Cervero specifically lists “bicycle and pedestrian paths” as part of accessibility planning. To a certain extent, the roads we have now would function perfectly well for this purpose (see McNichol, 2004). I find myself agreeing with Hans Monderman (McNichol, 2004) that the traditional design of roads tells drivers they can go as fast as they like without worrying about what's happening around them. Dedicated bicycle/footpaths suffer from the same basic design flaw, and worse. You don't put tractors together with cars on motorways, for much the same reason as you shouldn't put pedestrians and bicycles together on narrow paths.

Conventionalising the organic system (and composting the conventional)

HPSC3300 essay, S1 2006

Organic agriculture is usually viewed as distinct from, if not in total opposition to, conventional agriculture. Over the last 30 years this method of food production has received increasing attention from a variety of actors, each with their own understanding of what “organic” means. Various developments in organic agriculture have led some academics and organic practitioners to express the view that organic farming is gradually being eroded and “conventionalised”, rendering it ultimately no different to the mainstream industrial agriculture it sought to avoid. Lockie and Halpin (2005, p.284) call this the “conventionalisation thesis”. This essay proposes to examine organic agriculture as a technical system and trace its development as such, and to apply concepts from the social construction of technology (SCOT) to try to uncover the different interpretations of “organic” that coexist and often conflict with each other.

In English-speaking countries we tend to talk about the organic movement as a unified system that deliberately positioned itself outside mainstream agriculture more on the basis of belief than science. The name of Lady Eve Balfour is usually mentioned in the founding of the organic movement, along with a person of the appropriate nationality who helped popularise organic ideas and possibly founded a Soil Association in their home country[1]. The other system of alternative agriculture, better known perhaps in Europe (especially German-speaking Europe), is the biodynamic system, based on the ideas of Rudolf Steiner. While the two movements overlap to some extent in purpose and technique and have traditionally worked together in the common cause of alternative agriculture, this essay will focus on the organic movement, specifically in the context of the United States, United Kingdom and New Zealand.

Organic agriculture as a technical system

Although founded on scientific ideas, organic agriculture appears on the surface to be not particularly technological at all, shunning synthetic inputs and seemingly disdainful of the huge technological advances of industrial agriculture. But we can understand organics better as a technological system if we use Pacey's (1983) expanded meaning of “technology” and see the organic system as made up of interdependent technical, organisational and cultural aspects, all three of which are evident in the practice of technology. Using this definition, we could track the changes in the organic system over time by noting how the technical, organisational and cultural knowledge, practices and beliefs have shifted. It can also help us explain how different groups frame the organic system differently – we could, for instance, argue that certain groups privilege certain types of definitions: the United States Department of Agriculture (USDA) will tend to define “organic” technically (food production without synthetic inputs), whereas sociologists might frame the “organic movement” in terms of the organisational forms it has taken (say, smaller farmers and self-sufficient communities) and the beliefs of its adherents (stewardship of the land, local food and so on).

In addition, Hughes' (1999) ideas about the evolution of technological systems help us understand the development of the organic system. Identifying individual system builders might prove a little trickier, especially when we take into account the consumers who helped shape the evolving system quite directly in the US (cf. Goodman, 2000a).

The story of the organic system for its first 30-odd years is fairly similar in the US, UK and New Zealand. In the British Empire in the 1930s and 40s, in reaction to the use of synthetic fertilisers in agriculture, some scientists and influential people took up and promoted the idea that soil health is fundamental to the quality of food and therefore to human health (Clunies-Ross & Cox, 1994). Through groups like the Soil Associations, the idea of organic farming as a holistic practice which avoids synthetic chemicals was acted out, mainly by small farmers and gardeners. Because organic farming is a minimal-input method, the counter-culture movements of the 60s and 70s embraced organic practices in their own efforts toward self-sufficiency (Guthman, 1998). It is around the late 70s and early 80s that “conventionalisation” could be thought to have begun, with regional groups like the Soil Associations and other alternative growers' associations mobilising in their local areas to form regulatory bodies to oversee organic agriculture as the market for organic produce began to grow.

Conventionalising the organic system

The organic system, like any technological system, is open, allowing the entrance and exit of actors. From the 80s various actors who had previously very little to do with organic agriculture began to play a part, from the Ministry of Agriculture, Fisheries and Food (MAFF) in the UK to the USDA in America and various retailers, exporters and manufacturers of food products around the world. The process of regulation, begun at the local level, changed with the foundation of the International Federation of Organic Agriculture Movements (IFOAM) and became formally enshrined in national legislations of various countries, either through food marketing bodies or agricultural departments.

In the UK the process of self-regulation began in much the same way as in New Zealand (cf. Campbell & Stuart, 2005), with three related organisations, the Soil Association, the Henry Doubleday Research Association and the Bio-Dynamic Agricultural Association, coming together to establish third-party certification groups for organic agriculture (Clunies-Ross & Cox, 1994). Conflict between the certification groups and the Soil Association meant that there were two different standards of organic produce being sold within and exported from the UK. Formalisation and closure, when it eventually came, took the form of the United Kingdom Register of Organic Food Standards, part of the national food marketing body (Clunies-Ross & Cox, 1994).

The MAFF became interested in organic farming when policy shifted to reducing production in an attempt to cut agricultural surpluses. Organic farming was also seen as a solution to the problem of conventional farming in nitrate-sensitive areas, and treating it as a less-intensive production regime sat better with the MAFF, traditionally a promoter of intensification (Clunies-Ross & Cox, 1994). From a systems perspective, one could say that the MAFF, an agency with much invested in the conventional agricultural system, framed organic agriculture as an innovation to help deal with the reverse salients of that system, namely land degradation and the economic problem of surpluses.

In New Zealand there was never an ideological split between organic pragmatists and purists the way there was in the UK (cf. Clunies-Ross & Cox, 1994, pp.63-64), but Campbell and Stuart (2005) note the perhaps equally important discursive event, when the meaning of “organic” shifted from the farmer to the food. This change occurred at the same time as the sites of production and consumption divided, so that the consumer of organic food was no longer also themselves a producer.

Standards for production locally and for export were never given over to the NZ government in any form, but remained in the hands of the NZ Biological Producers Council (subsequently renamed BIO-GRO NZ). Campbell and Stuart (2005) remark on the changing nature of those standards. They began as locally negotiated, flexible rules, highly contingent on local conditions and problems of production and implemented with the intention of continuously “raising the bar”, as it were, of organic agriculture. But with the involvement of international bodies such as IFOAM and pressure to harmonise standards with other exporting nations, organic standards became more abstract and universal, with less appreciation of the local production reality.

Formal regulation and standards also gave other, more economically inclined actors the confidence to enter the organic system. Organisations from the conventional food system, like processors and manufacturers, started to recognise and respond to the demand for organic products. Processors such as Heinz Watties began to encourage conventional producers to convert to organic production, but without any significant change in the way that they sourced their produce. In the case of Heinz Watties, this meant a reduction in the number of organic suppliers over the years, while the size of those suppliers generally grew. This trend seems to display the economic logic of the conventional food system: suppliers concentrate close to the site of processing, and grow as they are rewarded for increasing efficiency and economies of scale (Lockie et al, 2000).

In the US the organic and conventional systems were initially much better differentiated at the organisational level of food processing and especially retailing, with the vast majority of organic food sales taking place through dedicated alternative or health food retailers (Boström & Klintman, 2006). A plethora of different certification systems and labelling schemes for organics existed in the US, which made the idea of a national standard appealing to some members of the organic industry[2]. Legislation was successfully passed in 1990 that gave responsibility for creating and overseeing the national standards to the USDA. Considering the USDA is a powerful state institution, heavily invested in the conventional system of agriculture and food production, this was bound to have some interesting consequences for the organic system in the US.

The power to define organic practice and processes was then definitively taken away from organic groups in the US. The primary purpose of the National Organic Program (NOP) standards set by the USDA was to facilitate trade, rather than to ensure the quality of food or adherence to any particular view of agricultural practice (Boström & Klintman, 2006). But while the USDA refused to commit to any claims about organic agriculture being safer or better for the environment than conventional agriculture (Goodman, 2000a), the language it used to frame the NOP standards and regulations was very much the language of risk assessment, of things that could be measured and known (Vos, 2000). Combined with the historical retailing of organic foods through health-food stores, there is clear potential here for organic practices and labelling to be confused with health standards in a way that isn't perhaps as obvious in the UK, Australia or New Zealand. Meanwhile, the reductionist perspective of the organic system as just conventional farming without the synthetic chemicals meant that the USDA saw fit to define a National List of proscribed and allowable inputs, which initially included genetically modified organisms (GMOs), irradiation and sewage sludge as “allowable inputs” under the NOP (Vos, 2000)[3].

Resolution of this controversy over the precise meaning of 'organic' came neither from the organic groups nor from the application of authority by the USDA, but from the consumers of organic produce, the general public whose confidence the labelling was intended to secure in the first place. In public consultation the backlash against the inclusion of the “big three” (GMOs, irradiation and sewage sludge) was too huge to be ignored – the USDA subsequently bowed to public opinion and removed the offending inputs from the National List (Boström & Klintman, 2006).

On the other side of the fence...

Looking outside of the organic system for a moment, it is becoming increasingly clear that changes are afoot in the conventional system of food production as well. Could the conventional system be becoming, well, less conventional?

Especially if we characterise the organic system as sustainable, focused on achieving yields now and for the future, and the conventional system as productivist and therefore interested in optimising yields now without worrying too much about the future (Clunies-Ross & Cox, 1994), we can see that some more sustainable techniques and methods are creeping into conventional practice. Soil conservation and health appear to be the new focus in agricultural research, as land degradation becomes a bigger and more visible issue (Macilwain, 2004; Anonymous, 2004). As consumers demand less pesticide use, less toxic and more target-specific methods are sought to control major pests, especially in the fresh produce industries (Macilwain, 2004). Eventually, alternative production standards adopted by the conventional sector, such as Integrated Pest Management (IPM) and Quality Assurance (QA), might receive their own labelling and branding schemes to capitalise on these environmentally-friendly and sustainable techniques (Lockie et al, 2000).

From a technological systems point of view, then, we could say that once the culture of groups in the conventional system changes, either from the consumer end with demands for fewer chemicals in food, or from the production end with farmers who want to rehabilitate their land or themselves use fewer chemicals, then the technical side of conventional food production will change too. If sustainability or health is the reverse salient of the system, innovations will be brought in to fix it.

The more things change, the more they stay the same?

The second part of the conventionalisation thesis is that the organic system will eventually split into two, with commercial producers joining the conventional food system as “certified organic” while smaller organic farmers, excluded from the formal process by the high cost of becoming certified, will continue to sell organic produce locally, on a trust system (Lockie & Halpin, 2005). Various observers think that this will be the case (for example, Goodman, 2000b), but a study of organic and conventional farmers in Australia shows that no such process of “conventionalisation” is presently occurring (Lockie & Halpin, 2005). For now, at least, there is no difference between small and large organic farmers in terms of their reasons for farming organically.

One of the more interesting results of the Lockie and Halpin (2005) study was that the attitudes, motivations and beliefs of organic and conventional farmers were not significantly different. Depending on how the organic system is characterised, either as in direct opposition to conventional farming or as totally ignored and simply passed over by the conventional system (for example, Clunies-Ross & Cox, 1994 and Campbell & Stuart, 2005), we immediately have two different pictures of farmers. On the assumption of adversity, organic farmers occupy some sort of moral high ground, being the only principled, concerned producers of food we have. It would be very easy to see conventional farmers as unprincipled, uncaring vandals of the environment, lured to the side of organic production not for environmental or ethical reasons, but motivated only by profit. However, on the assumption of ignorance, the results of the Australian study (Lockie & Halpin, 2005) begin to make sense. As far as farmers go, in the developed world at least, there aren't really two kinds of farmers, one with principles and one without.

So if the values of the social movements that embraced organics haven't been lost at the farm level (and indeed, probably never left), what then might “conventionalisation” mean? I would argue that as the organic system has grown, organic food is increasingly being consumed by people who never held the various values ascribed to organic methods and foods in the first place, and the way food is taken from the farm to the consumer reflects this. At the level of processing, manufacturing, retailing and consumption, the organic system and the conventional system have become intertwined and are unlikely to untangle without a radical shift in the organisation and culture of food. But if “conventionalisation” simply means becoming more mainstream, and describes the idea that organics is turning into a legitimate production system instead of an alternative or radical system, then perhaps conventionalisation is not an unintentional side-effect at all, but the culmination of a long-standing project within the organic movement itself, to be recognised as a mainstream farming method and contribute to agricultural policy making (cf. Clunies-Ross & Cox, 1994, p. 66).

In a sense, both the organic and conventional systems are descended from the same parent system of pre-industrial agriculture. Tracking the development of the two, it could be said that while the conventional system is quite mature, the organic system has just come into its growth phase, with the area of land under organic management rising alongside the number of regulating bodies, processors and retailers involved in overseeing, manufacturing and selling organic foods. Both systems have borrowed innovations from each other to fix their respective problems, with the organic system borrowing a formal regulation and labelling structure from conventional agriculture, while the conventional system borrows and adapts soil management and pest control techniques. To use a mathematical metaphor, the set of technical, organisational and cultural elements that comprises the organic system is no longer mutually exclusive of the set of elements that makes up the conventional system.

Unlike other technological systems, such as the telephone or car, which have at their heart a definite, tangible artifact, the organic system is not as easy to define or even accurately delineate. The site of meaning of “organic” has shifted from the organic person to the organic food (Campbell & Stuart, 2005) and even to the organic social movement. As more power is accorded to certification agencies and marketing bodies to define the technicalities of what makes something organic, it seems likely that the meaning of “organic” will be stabilised by the consumers' understandings of organic food, whatever those may turn out to be.


References

Anonymous (2004) 'Organic farming enters the mainstream.' Nature, 428(6985):783.

Boström, M & Klintman, M (2006) 'State-centred versus nonstate-driven organic food standardization: A comparison of the US and Sweden.' Agriculture and Human Values, 23:163-180.

Clunies-Ross, T & Cox, G (1994) 'Challenging the Productivist Paradigm: Organic Farming and the Politics of Agricultural Change' In: Regulating Agriculture, eds Phillip Lowe, Terry Marsden & Sarah Whitmore. David Fulton Publishers, London. pp.53-74.

Campbell, H & Stuart, A (2005) 'Disciplining the organic commodity' In: Agricultural Governance, eds Vaughan Higgins & Geoffrey Lawrence. Routledge, London. pp.84-97.

Goodman, D (2000a) 'Regulating organic: A victory of sorts.' Agriculture and Human Values, 17(3):212-213.

Goodman, D (2000b) 'Organic and conventional agriculture: materializing discourse and agro-ecological managerialism.' Agriculture and Human Values, 17(3):215-219.

Guthman, J (1998) 'Regulating meaning, appropriating nature: the codification of California organic agriculture.' Antipode, 30(2):135-154.

Hughes, T P (1999) 'The Evolution of Large Technological Systems' In: The Science Studies Reader, ed. Mario Biagioli. Routledge, New York.

Lockie, S & Halpin, D (2005) 'The “conventionalisation” thesis reconsidered: structural and ideological transformation of Australian organic agriculture.' Sociologia Ruralis, 45(4):284-307.

Lockie, S, Lyons, K & Lawrence, G (2000) 'Constructing “green” foods: Corporate capital, risk, and organic farming in Australia and New Zealand.' Agriculture and Human Values, 17(4):315-322.

Macilwain, C (2004) 'Organic: Is it the future of farming?' Nature, 428(6985):792-793.

Pacey, A (1983) The Culture of Technology. Blackwell, Oxford.

Vos, T (2000) 'Visions of the middle landscape: Organic farming and the politics of nature.' Agriculture and Human Values, 17(3):245-256.



[1] For example, see Clunies-Ross & Cox (1994), Vos (2000) and Campbell & Stuart (2005) for a quick overview of the organics movements and national founders in the UK, US and New Zealand respectively.

[2] The way this is recounted is always fairly vague, but I have noticed that authors almost always specify the organic “industry”, as opposed to, say, the social movement, as responsible for lobbying the US government for a set of national standards for organics. So it would be fair to guess that whether or not these lobbyists had a strong commitment to organic principles, they also had a strong commitment to expanding and establishing the US domestic and export market. However, the exact groups who lobbied for national standards in the US and their motivations do not appear to have been examined at all, let alone in any great depth. For example, see Guthman (1998), Vos (2000) and Boström & Klintman (2006) for totally unsatisfactory accounts of how the USDA got put in charge of regulating organics in the US.

[3] The inclusion of GMOs as allowable inputs in the first draft of the NOP Proposed Rule meant that the issue of the definition of 'organic' also became valid for advocates of GM food labelling. The organic label was (and still is at present) a de facto 'GM free' label in the US, where the mandatory labelling of foods containing genetically modified ingredients has never been instituted, unlike the EU, Australia and NZ.