Monday, December 12, 2016

More on cephalopod minds


When I first posted on cephalopod intelligence a year or so ago, I assumed it would be a one-off diversion into the deep blue sea (link). But now I've read the fascinating recent book by Peter Godfrey-Smith, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, and it is interesting enough to justify a second deep dive. Godfrey-Smith is a philosopher, but he is also a scuba diver, and his interest in cephalopods derives from his experiences under water. This original stimulus has led to two very different lines of inquiry. What is the nature of the mental capacities of an octopus? And how did "intelligence" happen to evolve twice on earth through such different pathways? Why is a complex nervous system an evolutionary advantage for a descendant of a clam?

Both questions are of philosophical interest. The nature of consciousness, intelligence, and reasoning has been of great concern to philosophers in the study of the philosophy of mind. The questions that arise bring forth a mixture of difficult conceptual, empirical, and theoretical issues: how does consciousness relate to behavioral capacity? Are intelligence and consciousness interchangeable? What evidence would permit us to conclude that a given species of animal has consciousness and reasoning ability?

The evolutionary question is also of interest to philosophers. The discipline of the philosophy of biology focuses much of its attention on the issues raised by evolutionary theory. Elliott Sober's work illustrates this form of philosophical thinking -- for example, The Nature of Selection: Evolutionary Theory in Philosophical Focus and Evidence and Evolution: The Logic Behind the Science. Godfrey-Smith tells an expert's story of the long evolution of mollusks, in and out of their shells, with emerging functions and organs well suited to the opportunities available in their oceanic environments. One of the evolutionary puzzles to be considered is the short lifespan of octopuses and squid -- just a few years (160). Why would the organism invest so heavily in a cognitive system that supported its life for such a short time?

A major part of the explanation that G-S favors involves the fact that octopuses are hunters, and a complex nervous system is more of an advantage for a predator than for its prey. (Wolves are more intelligent than elk, after all!) Having a nervous system that supports anticipation, planning, and problem solving turns out to be an excellent preparation for being a predator. Here is a good example of how that cognitive advantage plays out for the octopus:
David Scheel, who works mostly with the giant Pacific octopus, feeds his animals whole clams, but as his local animals in Prince William Sound do not routinely eat clams, he has to teach them about the new food source. So he partly smashes a clam and gives it to the octopus. Later, when he gives the octopus an intact clam, the octopus knows that it’s food, but does not know how to get at the meat. The octopus will try all sorts of methods, drilling the shell and chipping the edges with its beak, manipulating it in every way possible … and then eventually it learns that its sheer strength is sufficient: if it tries hard enough, it can simply pull the shell apart. (70)
Exploration, curiosity, experimentation, and play are crucial components of the kind of flexibility that organisms with big nervous systems bring to earning their living.

G-S brings up a genuinely novel aspect of the organismic value of a complex nervous system: not just problem-solving applied to the external environment, but coordination of the body itself. Intelligence evolves to handle the problem of coordinating the motions of the parts of the body.
The cephalopod body, and especially the octopus body, is a unique object with respect to these demands. When part of the molluscan “foot” differentiated into a mass of tentacles, with no joints or shell, the result was a very unwieldy organ to control. The result was also an enormously useful thing, if it could be controlled. The octopus’s loss of almost all hard parts compounded both the challenge and the opportunities. A vast range of movements became possible, but they had to be organized, had to be made coherent. Octopuses have not dealt with this challenge by imposing centralized governance on the body; rather, they have fashioned a mixture of local and central control. One might say the octopus has turned each arm into an intermediate-scale actor. But it also imposes order, top-down, on the huge and complex system that is the octopus body. (71)
In this picture, neurons first multiply because of the demands of the body, and then sometime later, an octopus wakes up with a brain that can do more. (72)
This is a genuinely novel and intriguing idea about the creation of a new organism over geological time. It is as if a plastic self-replicating and self-modifying artifact bootstrapped itself from primitive capabilities into a directed and cunning predator. Or perhaps it is a preview of the transition that artificial intelligence systems embodying adaptable learning processes and expanding linkages to the control systems of the physical world may take in the next fifty years.  

What about the evolutionary part of the story? Here is a short passage where Godfrey-Smith considers the long evolutionary period that created both vertebrates and mollusks:
The history of large brains has, very roughly, the shape of a letter Y. At the branching center of the Y is the last common ancestor of vertebrates and mollusks. From here, many paths run forward, but I single out two of them, one leading to us and one to cephalopods. What features were present at that early stage, available to be carried forward down both paths? The ancestor at the center of the Y certainly had neurons. It was probably a worm-like creature with a simple nervous system, though. It may have had simple eyes. Its neurons may have been partly bunched together at its front, but there wouldn’t have been much of a brain there. From that stage the evolution of nervous systems proceeds independently in many lines, including two that led to large brains of different design. (65)
The primary difference that G-S highlights here is the nature of the neural architecture that each line eventually favors: a central cord connecting periphery to a central brain; and a decentralized network of neurons distributed over the whole body.
Further, much of a cephalopod’s nervous system is not found within the brain at all, but spread throughout the body. In an octopus, the majority of neurons are in the arms themselves— nearly twice as many as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch, but also the capacity to sense chemicals— to smell, or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, like reaching and grasping. (67)
So what about the "alien intelligence" part of G-S's story? G-S emphasizes the fact that octopus mentality is about as alien to human experience and evolution as it could be.
Cephalopods are an island of mental complexity in the sea of invertebrate animals. Because our most recent common ancestor was so simple and lies so far back, cephalopods are an independent experiment in the evolution of large brains and complex behavior. If we can make contact with cephalopods as sentient beings, it is not because of a shared history, not because of kinship, but because evolution built minds twice over. This is probably the closest we will come to meeting an intelligent alien. (9)
This too is intriguing. G-S is right: the evolutionary story he works through here gives great encouragement to the idea that organisms with a few bits of neuronal material, situated in complex environments, can evolve along wildly different pathways, leading to cognitive capabilities and features of awareness that are dramatically different from human intelligence. Life is plastic and evolutionary time is long. The ideas of the unity of consciousness and the unified self don't have any particular primacy or uniqueness. For example:
The octopus may be in a sort of hybrid situation. For an octopus, its arms are partly self—they can be directed and used to manipulate things. But from the central brain’s perspective, they are partly non-self too, partly agents of their own. (103)
So there is nothing inherently unique about human intelligence, and no good reason to assume that all intelligent creatures would find a basis for mutual understanding and communication. Sorry, Captain Kirk, the universe is stranger than you ever imagined!

Thursday, December 8, 2016

French sociology


Is sociology as a discipline different in France than in Germany or Britain? Or do common facts about the social world entail that sociology is everywhere the same?

The social sciences feel different from physics or mathematics, in that their development seems much more path-dependent and contingent. The problems selected, the theoretical resources deployed, the modes of evidence considered most relevant -- all these considerations have to be specified; and they have been specified differently in different times and places. An earlier post considered the arc of sociology in France (link).

Johan Heilbron's French Sociology has now appeared, and it is a serious effort to make sense of the tradition of sociology as it developed in France. (Jean-Louis Fabiani's Qu'est-ce qu'un philosophe français? provides a similar treatment of philosophy in France; link.) Heilbron approaches this topic from the point of view of historical sociology; he wants to write a historical sociology of the discipline of sociology.
For this historical-sociological view I have adopted a long-term perspective in order to uncover patterns of continuity and change that would have otherwise remained hidden. Several aspects of contemporary French sociology—its position in the Faculty of Letters, for example—can be understood only by going back in time much further than is commonly done. (2)
Understanding ideas is not merely about concepts, theories, and assumptions—however important they are—it simultaneously raises issues about how such ideas come into being, how they are mobilized in research and other intellectual enterprises, and how they have, or have not, spread beyond the immediate circle of producers. Understanding intellectual products, to put it simply and straightforwardly, cannot be divorced from understanding their producers and the conditions of production. (3)
Heilbron traces the roots of sociological thinking to the Enlightenment in France, with the intellectual ethos that any question could be considered scientifically and rationally.
If the Enlightenment has been seen as a formative period for the social sciences, it was fundamentally because a secular intelligentsia now explicitly claimed and effectively exercised the right to analyze any subject matter, however controversial, independently of official doctrines. (13)
This gives an intellectual framework to the development of sociology; but for Heilbron the specifics of institutions and networks are key to understanding the particular pathway the discipline followed. Heilbron identifies the establishment after the Revolution of national academies for natural science, human science, and literature as an important moment in the development of the social sciences: "The national Académie des sciences morales et politiques (1832) became the official center for moral and political studies under the constitutional regime of the July monarchy" (14). In fact, Heilbron argues that the disciplines of the social sciences in France took shape as a result of a dynamic competition between the Academy and the universities. Much of the work of the Academy during the mid-nineteenth century was directed towards social policy and the "social question" -- the impoverished conditions of the lower classes and the attendant risk of social unrest. There was the idea that the emerging social sciences could guide the formation of intelligent and effective policies by the state (20).

Another major impetus to the growth of the social sciences was the French defeat in the Franco-Prussian War in 1870. This national trauma gave a stimulus to the enhancement of university-based disciplines. The case was made (by Emile Zola, for example) that France was defeated because Prussia had the advantage in science and education; therefore France needed to reform and expand its educational system and research universities.
Disciplinary social science now became the predominant mode of teaching, research, and publishing. University-based disciplines gained a greater degree of autonomy not only with respect to the national Academy but also vis-à-vis governmental agencies and lay audiences. Establishing professional autonomy in its different guises—conceptually, socially, and institutionally—was the main preoccupation of the representatives of the university-based disciplines. (30)
Heilbron pays attention to the scientific institutions through which the social sciences developed in the early twentieth century. Durkheim's success in providing orientation to the development of sociology during its formative period rested in part on his ability to create and sustain some of those institutions, especially L'Année sociologique. Here is Heilbron's summary of this fact:
Because the Durkheimian program eclipsed that of its competitors and obtained considerable intellectual recognition, sociology in France did not enter the university as a science of “leftovers,” as Albion Small said about American sociology. Durkheimian sociology, quite the contrary, represented a challenging and rigorous program to scientifically study crucial questions about morality, religion, and other collective representations, their historical evolution and institutional underpinnings. (90)
Here is a graph of the relationships among a number of the primary contributors to L'Année sociologique during 1898-1912:



But Heilbron notes that this influence in the institutions of publication in the discipline of sociology did not translate directly or immediately into a primary location for the Durkheimians within the developing university system.

Heilbron's narrative confirms a break in the development of sociology at the end of World War II. And in fact, it seems to be true that sociology became a different discipline in France after 1950. Here is how Heilbron characterizes the intellectual field:
Sociological work after 1945 was caught up in a constellation that was defined by two antagonistic poles: an intellectual pole represented by existentialist philosophers who dominated the intellectual and much of the academic field and a policy-related research pole in state institutes for statistical, economic, and demographic studies. (123-124)
An important part of the growth of sociology in France in this period was stimulated by the practical needs of policy reform and economic reorganization:
It was in part because of a lack of intellectual status that the demand for applied research came to fulfill a new function for the social sciences. The growth of applied social science research was produced by the needs of economic recovery and the new role of the state in that respect. (129)
But academic sociology did not progress rapidly:
In the postwar academic structure, sociology was still a rather marginal phenomenon, a discipline with little prestige that was institutionally no more than a minor for philosophy undergraduates. The leading academics were the two professors at the Sorbonne, Georges Davy and Georges Gurvitch, each of whom presided over his own journal. Davy had succeeded Halbwachs in 1944 and resumed the publication of the Année sociologique, assisted by the last survivors of the Durkheimian network. (130)
Assessing the situation in 1955, Alain Touraine observed a near-total separation between university sociology and empirical research. Researchers were isolated, he wrote, and they lacked solid training, research experience, and professional prospects. Their working conditions, furthermore, were poor. The CES had only three study rooms for almost forty researchers and neither the CES nor the CNRS provided research funding. (139)
On Heilbron's account, the large changes in sociology began to accelerate in the 1970s. Figures like Touraine, Bourdieu, Crozier, and Boudon brought substantially new thinking to both theoretical ideas and research problems for sociology. In a later post I will consider his treatment of this period in the development of the discipline.

(Here is an earlier post discussing Gabriel Abend's ideas about differences in the discipline of sociology across the world; link.)

Thursday, December 1, 2016

Processual sociology


Andrew Abbott is one of the thinkers within sociology who is not dependent upon a school of thought -- not structuralism, not positivism, not ethnomethodology, not even the Chicago School. He approaches the problems that interest him with a fresh eye and therefore represents a source of innovation and new ideas within sociological theory. He also presents some very compelling intuitions about the social world when it comes to social ontology. He thinks that many social scientists bring unfortunate assumptions with them about the fixity of the social world -- assumptions about entities and properties, assumptions about causation, assumptions about laws. And he shows in many places how misleading these assumptions are -- not least in his study of the professions, The System of Professions: An Essay on the Division of Expert Labor, but in his history of the Chicago School of sociology as well (Department and Discipline: Chicago Sociology at One Hundred). Processual Sociology presents his current thinking about some of those important ideas.

The central organizing idea of Processual Sociology is one that finds expression in much of Abbott's work, the notion that we should think of the social world as a set of ongoing processes rather than a collection of social entities and structures. He sometimes refers to this as a relational view of the actor and the social environment. Here is how he describes the basic ontological idea of a processual social world:
By a processual approach, I mean an approach that presumes that everything in the social world is continuously in the process of making, remaking, and unmaking itself (and other things), instant by instant. The social world does not consist of atomic units whose interactions obey various rules, as in the thought of the economists. Nor does it consist of grand social entities that shape and determine the little lives of individuals, as in the sociology of Durkheim and his followers. (preface)
This isn't a wholly unfamiliar idea in sociological theory; for example, Norbert Elias advocated something like it with his idea of "figurational sociology" (link). But Abbott's adherence to the approach and his sustained efforts to develop sociological ideas in light of it are distinctive. 

Abbott offers the idea of a "career" as an example of what he means by a processual social reality. A person's career is not a static thing that exists in a confined period of time; rather, it is a series of choices, developments, outcomes, and plans that accumulate over time in ways that in turn affect the individual's mentality. Or consider his orienting statements about professions in The System of Professions:
The professions, that is, make up an interdependent system. In this system, each profession has its activities under various kinds of jurisdiction. Sometimes it has full control, sometimes control subordinate to another group. Jurisdictional boundaries are perpetually in dispute, both in local practice and in national claims. It is the history of jurisdictional disputes that is the real, the determining history of the professions. Jurisdictional claims furnish the impetus and the pattern to organizational developments. Thus an effective historical sociology of professions must begin with case studies of jurisdictions and jurisdiction disputes. It must then place these disputes in a larger context, considering the system of professions as a whole. (kl 208)
His comments about the discipline of sociology itself in Department and Discipline have a similar fluidity. Rather than thinking of sociology as a settled "thing" within the intellectual firmament, we are better advised to look at the twists and turns various sociologists, departments, journals, conferences, and debates have made of the configuration during a period in time.

These examples have to do with the nature of social things -- institutions and organizations, for example. But Abbott extends the processual view to the actors themselves. He argues that we should look to the flow of actions rather than the actor (again, a parallel with Elias); so actions are as much the result of shifting circumstances as they are the reflective choices of unitary actors. Moreover, the individual himself or herself continues to change throughout life and throughout a series of actions. Memories change, desires change, and social relationships change. Individuals are "historical" -- they are embedded in concrete circumstances and relationships that contribute to their actions and movements at each moment. (This is the thrust of the first chapter of the volume.) Abbott extends this idea of the "processual individual" by reconsidering the concept of human nature (chapter 2).
For a processual view that begins with problematizing that continuity, an important first step is to address the concept of human nature, asking what kind of concept of human nature is possible under processual assumptions. (16)
Here is something like a summary of the view that he develops in this chapter:
Human nature, first, is rooted in the three modes of historicality—corporeal, memorial, and recorded—and the complex of substantive historicality that they enable. It concerns the means by which those modes interact and shape the developing lineage that is a person or social entity. It is also rooted in what we might call optativity, the human capacity to envision alternative futures and indeed alternative future units to the social process. (31-32)
Ecological thinking plays a large role in Abbott's conception of the social realm. Social and human arrangements are not to be thought of in isolation; instead, Abbott advocates that we should consider them in a field of ecological interdependence. A research library does not exist uniquely by itself; rather, it exists in a field of funding, institutional control, user demands, legal regulations, and public opinion. Its custodians make decisions about the purchase of materials based on the needs and advocacy of various stakeholders, and the operation and holdings of the research library are a joint product of these contextual influences. In an innovative move, Abbott argues that the ecology within which an institution like a library sits is actually a linked set of ecologies, each exercising influence over the others. So the library influences the publisher in the same activities through which the publisher influences the library. Here is a brief description of the idea of linked ecologies:
I here answer this critique with the concept of linked ecologies. Instead of envisioning a particular ecology as having a set of fixed surrounds, I reconceptualize the social world in terms of linked ecologies, each of which acts as a flexible surround for others. The overall conception is thus fully general. For expository convenience, however, it is easiest to develop the argument around a particular ecology. I shall here use that of the professions. (35)
The central topic for a sociologist in a processual framework is the problem of stability: given the permanent fact of change, how does continuity emerge and persist? This is the problem of order.
I am concerned to envision what kinds of concepts of order might be appropriate under a different set of social premises: those of processualism. As the first two chapters of this book have argued, the processual ontology does not start with independent individuals trying to create a society. It starts with events. Social entities and individuals are made out of that ongoing flow of events. The question therefore arises of what concept of order would be necessary if we started out not with the usual state-of-nature ontology, but with this quite different processual one. (200)
Here Abbott's thinking converges with several other sociologists and theorists whose work provides insights concerning the persistence of social entities, institutions, or assemblages. Abbott, Kathleen Thelen, and Manuel DeLanda (link, link) agree on a fundamental question: we must investigate the mechanisms and circumstances that permit social institutions, rules, or arrangements to persist in the face of the stochastic pressures of change induced by actors and circumstances.




Wednesday, November 30, 2016

DeLanda on historical ontology


A primary reason for thinking that assemblage theory is important is the fact that it offers new ways of thinking about social ontology. Instead of thinking of the social world as consisting of fixed entities and properties, we are invited to think of it as consisting of fluid agglomerations of diverse and heterogeneous processes. Manuel DeLanda's recent book Assemblage Theory sheds new light on some of the complexities of this theory.

Particularly important is the question of how to think about the reality of large historical structures and conditions. What is "capitalism" or "the modern state" or "the corporation"? Are these temporally extended but unified things? Or should they be understood in different terms altogether? Assemblage theory suggests a very different approach. Here is an astute description by DeLanda of historical ontology with respect to the historical imagination of Fernand Braudel:
Braudel's is a multi-scaled social reality in which each level of scale has its own relative autonomy and, hence, its own history. Historical narratives cease to be constituted by a single temporal flow -- the short timescale at which personal agency operates or the longer timescales at which social structure changes -- and becomes a multiplicity of flows, each with its own variable rates of change, its own accelerations and decelerations. (14)
DeLanda extends this idea by suggesting that the theory of assemblage is an antidote to essentialism and reification of social concepts:
Thus, both 'the Market' and 'the State' can be eliminated from a realist ontology by a nested set of individual emergent wholes operating at different scales. (16)
I understand this to mean that "Market" is a high-level reification; it does not exist in and of itself. Rather, the things we want to encompass within the rubric of market activity and institutions are an agglomeration of lower-level concrete practices and structures which are contingent in their operation and variable across social space. And this is true of other high-level concepts -- capitalism, IBM, or the modern state.

DeLanda's reconsideration of Foucault's ideas about prisons is illustrative of this approach. After noting that institutions of discipline can be represented as assemblages, he asks the further question: what are the components that make up these assemblages?
The components of these assemblages ... must be specified more clearly. In particular, in addition to the people that are confined -- the prisoners processed by prisons, the students processed by schools, the patients processed by hospitals, the workers processed by factories -- the people that staff those organizations must also be considered part of the assemblage: not just guards, teachers, doctors, nurses, but the entire administrative staff. These other persons are also subject to discipline and surveillance, even if to a lesser degree. (39)
So how do assemblages come into being? And what mechanisms and forces serve to stabilize them over time?  This is a topic where DeLanda's approach shares a fair amount with historical institutionalists like Kathleen Thelen (link, link): the insight that institutions and social entities are created and maintained by the individuals who interface with them, and that both parts of this observation need explanation. It is not necessarily the case that the same incentives or circumstances that led to the establishment of an institution also serve to gain the forms of coherent behavior that sustain the institution. So creation and maintenance need to be treated independently. Here is how DeLanda puts this point:
So we need to include in a realist ontology not only the processes that produce the identity of a given social whole when it is born, but also the processes that maintain its identity through time. And we must also include the downward causal influence that wholes, once constituted, can exert on their parts. (18)
Here DeLanda links the compositional causal point (what we might call the microfoundational point) with the additional idea that higher-level social entities exert downward causal influence on lower-level structures and individuals. This is part of his advocacy of emergence; but it is controversial, because it might be maintained that the causal powers of the higher-level structure are simultaneously real and derivative upon the actions and powers of the components of the structure (link). (This is the reason I prefer to use the concept of relative explanatory autonomy rather than emergence; link.)

DeLanda summarizes several fundamental ideas about assemblages in these terms:
  1. "Assemblages have a fully contingent historical identity, and each of them is therefore an individual entity: an individual person, an individual community, an individual organization, an individual city." 
  2. "Assemblages are always composed of heterogeneous components." 
  3. "Assemblages can become component parts of larger assemblages. Communities can form alliances or coalitions to become a larger assemblage."
  4. "Assemblages emerge from the interactions between their parts, but once an assemblage is in place it immediately starts acting as a source of limitations and opportunities for its components (downward causality)." (19-21)
There is also the suggestion that persons themselves should be construed as assemblages:
Personal identity ... has not only a private aspect but also a public one, the public persona that we present to others when interacting with them in a variety of social encounters. Some of these social encounters, like ordinary conversations, are sufficiently ritualized that they themselves may be treated as assemblages. (27)
Here DeLanda cites the writings of Erving Goffman, who focuses on the public scripts that serve to constitute many kinds of social interaction (link); equally one might refer to Andrew Abbott's processual and relational view of the social world and individual actors (link).

The most compelling example that DeLanda offers here and elsewhere of complex social entities construed as assemblages is perhaps the most complex and heterogeneous product of the modern world -- cities.
Cities possess a variety of material and expressive components. On the material side, we must list for each neighbourhood the different buildings in which the daily activities and rituals of the residents are performed and staged (the pub and the church, the shops, the houses, and the local square) as well as the streets connecting these places. In the nineteenth century new material components were added, water and sewage pipes, conduits for the gas that powered early street lighting, and later on electricity and telephone wires. Some of these components simply add up to a larger whole, but citywide systems of mechanical transportation and communication can form very complex networks with properties of their own, some of which affect the material form of an urban centre and its surroundings. (33)
(William Cronon's social and material history of Chicago in Nature's Metropolis: Chicago and the Great West is a very compelling illustration of this additive, compositional character of the modern city; link. Contingency and conjunctural causation play a very large role in Cronon's analysis. Here is a post that draws out some of the consequences of the lack of systematicity associated with this approach, titled "What parts of the social world admit of explanation?"; link.)



Sunday, November 27, 2016

What is the role of character in action?


I've been seriously interested in the question of character since being invited to contribute to a volume on the subject a few years ago. That volume, Questions of Character, has now appeared in print, and it is an excellent and engaging contribution. Iskra Fileva was the director of the project and is the editor of the volume, and she did an excellent job in selecting topics and authors. She also wrote an introduction to the volume and introductions to all five parts of the collection. It would be possible to look at Fileva's introductions collectively as a very short book on character by themselves.

So what is "character"? To start, it is a concept of the actor that draws our attention to enduring characteristics of moral and practical propensities, rather than focusing on the moment of choice and the criteria recommended by the ethicist on the basis of which to make choices. Second, it is an idea largely associated with the "virtue" ethics of Aristotle. The other large traditions in the history of ethics -- utilitarianism and Kantian ethics, or consequentialist and deontological theories -- have relatively little to say about character, focusing instead on action, rules, and moral reasoning. And third, it is distinguished from other moral ideas by its close affinity to psychology as well as philosophy. It has to do with the explanation of the behavior of ordinary people, not just philosophical ideas about how people ought to behave.  

This is a fundamentally important question for anyone interested in formulating a theory of the actor. To hold that human beings sometimes have "character" is to say that they have enduring features of agency that sometimes drive their actions in ways that override the immediate calculation of costs and benefits, or the immediate satisfaction of preferences. For example, a person might have the virtues of honesty, courage, or fidelity -- leading him or her to tell the truth, resist adversity, or keep commitments and promises, even when there is an advantage to be gained by doing the contrary. Or conceivably a person might have vices -- dishonesty, cruelty, egotism -- that lead him or her to act accordingly -- sometimes against personal advantage. 

Questions of Character is organized into five major sets of topics: ethical considerations, moral psychology, empirical psychology, social and historical considerations, and art and taste. Fileva has done an excellent job of soliciting provocative essays and situating them within a broader context. Part I includes innovative discussions of how the concept of character plays out in Aristotle, Hume, Kant, and Nietzsche. Part II considers different aspects of the problem of self-control and autonomy. Part III examines the experimental literature on behavior in challenging situations (for example, the Milgram experiment), and whether these results demonstrate that human actors are not guided by enduring virtues. Part IV examines the intersection between character and large social settings, including history, the market, and the justice system. And Part V considers the role of character in literature and the arts, including the interesting notion that characters in novels become emblems of the character traits they display.

The most fundamental question raised in this volume is this: what is the role of character in human action? How, if at all, do embodied traits, virtues and vices, or personal commitments influence the actions that we take in ordinary and extraordinary circumstances? And the most intriguing challenge raised here is one that casts doubt on the very notion of character: "there are no enduring behavioral dispositions inside a person that warrant the label 'character'." Instead, all action is opportunistic and in the moment. Action is "situational" (John Doris, Lack of Character: Personality and Moral Behavior; Ross and Nisbett, The Person and the Situation). On this approach, what we call "character" and "virtue" is epiphenomenal; action is guided by factors more fundamental than these.

My own contribution focuses on the ways in which character may be shaped by historical circumstances. Fundamentally I argue that growing up during the Great Depression, the Jim Crow South, or the Chinese Revolution potentially cultivates fairly specific features of mentality in the people who had these formative experiences. The cohort itself has a common (though not universal) character that differs from that of people in other historical periods. As a consequence people in those cohorts commonly behave differently from people in other cohorts when confronted with roughly similar action situations. So character is both historically shaped and historically important. Much of my argument was worked out in a series of posts here in Understanding Society.

This project is successful in its own terms; the contributors have created a body of very interesting discussion and commentary on an important element of human conduct. The volume is distinctly different from other collections in moral psychology or the field of morality and action. But the project is successful in another way as well. Fileva and her colleagues succeeded in drawing together a novel intellectual configuration of scholars from numerous disciplines to engage in a genuinely trans-disciplinary research collaboration. Through several academic conferences (one of which I participated in), through excellent curatorial and editorial work by Fileva herself, and through the openness of all the collaborators to listen with understanding to the perspectives of researchers in other disciplines, the project succeeded in demonstrating the power of interdisciplinary collaboration in shedding light on an important topic. I believe we understand better the intriguing complexities of actors and action as a result of the work presented in Questions of Character.

(Here is a series of posts on the topic of character; link.)

Thursday, November 24, 2016

Coarse-graining of complex systems


The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about its micro-states? It turns out that the relationship between micro and macro in physical systems has some surprising aspects. The mathematical technique of "coarse-graining" represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local averages without reducing our ability to calculate the macro-level dynamics of the system.

A 2004 article by Israeli and Goldenfeld, "Coarse-graining of cellular automata, emergence, and the predictability of complex systems" (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let's see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:
We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.
This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome ("without accounting for small-scale details"). Another related idea is that macro-level patterns are (sometimes) forced by the "rules that govern the large scale dynamics" -- rather than by the micro-level states.

Coarse-graining methodology is a family of computational techniques that permits "averaging" of values (intensities) from the micro-level to a higher level of organization. The computational models developed here were primarily applied to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3x3 square of the grid and assigning it the up-down value corresponding to the majority of atoms in that square. Now the information about nine atoms has been reduced to a single piece of information for the 3x3 square. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might "coarse-grain" this information by replacing the household-level data with the majority representation of 3x3 grids of households. We might take another step of aggregation by considering 3x3 grids of grids, and representing the larger composite by the majority value of the component grids.
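The averaging procedure just described is easy to sketch in code. The following Python fragment is my own illustration, not code from Israeli and Goldenfeld: it coarse-grains a random 9x9 grid of up/down spins by majority vote within each 3x3 square.

```python
import random

def coarse_grain(grid, block=3):
    """Replace each block x block square of +1/-1 values with its majority."""
    n = len(grid)
    coarse = []
    for bi in range(0, n, block):
        row = []
        for bj in range(0, n, block):
            total = sum(grid[i][j]
                        for i in range(bi, bi + block)
                        for j in range(bj, bj + block))
            row.append(1 if total > 0 else -1)  # odd block size rules out ties
        coarse.append(row)
    return coarse

random.seed(1)
micro = [[random.choice([1, -1]) for _ in range(9)] for _ in range(9)]
macro = coarse_grain(micro)  # 81 micro-level values reduced to 9
```

A second pass of the same function over a larger grid would implement the "grids of grids" step: aggregation here is just iterated majority voting.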

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:
Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)
Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived ("extracted") from the micro level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the states of S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.
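A toy example (mine, not drawn from the coarse-graining literature) makes the many-one point concrete: map each 3-bit micro-state to its majority value and verify the supervenience condition directly.

```python
from itertools import product

# all 2^3 micro-states of three binary cells
micro_states = list(product([0, 1], repeat=3))

# coarse-grained description: the majority value of the three cells
coarse = {m: int(sum(m) >= 2) for m in micro_states}

# many-one: four distinct micro-states share each coarse value
assert sum(1 for m in micro_states if coarse[m] == 1) == 4

# supervenience: a difference at the coarse level
# entails some difference at the micro level
assert all(m1 != m2
           for m1 in micro_states
           for m2 in micro_states
           if coarse[m1] != coarse[m2])
```

The second assertion holds trivially because the coarse description is a function of the micro-state, which is exactly why successful coarse-graining is consistent with supervenience.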

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven't yet addressed here. This is their connection between complex physical systems and cellular automata. A cellular automaton is a simulation governed by simple algorithms governing the behavior of each cell within the simulation. The game of Life is an example of a cellular automaton (link). Here is what they say about the connection between physical systems and their simulations as a system of algorithms:
The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)
Suppose we are interested in simulating the physical process through which a pot of boiling water undergoes sudden turbulence shortly before 100 degrees C (the transition point between water and steam). There seem to be two large alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the physical parameters present during the short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to provide a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, then simulating the process will be computationally intractable: there is no shortcut, and we must compute every intervening step at the molecular level.
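A minimal version of such a simulation (my sketch, using Wolfram's standard rule-numbering convention) shows what computational irreducibility means in practice: to learn the automaton's state after twenty steps, we simply have to run all twenty updates.

```python
def step(cells, rule=110):
    """One synchronous update of a one-dimensional nearest-neighbor CA
    with periodic boundaries; `rule` is Wolfram's rule number."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1          # a single live cell in the middle
for _ in range(20):    # no known shortcut: evolve step by step
    cells = step(cells)
```

Rule 110 is one of the elementary CA known to be Turing-universal, which is why Israeli and Goldenfeld's result -- that even such rules often admit coarse-grained descriptions -- is striking.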

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, "Novel Algorithm for Coarse-Graining of Cellular Automata" (link). They describe "coarse-graining" in their abstract in these terms:
The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.
Here K&D seem to be expressing the view that the approach to coarse-graining as a technique for simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld will fail in the case of more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above).

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level -- the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?
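To make the question concrete, here is a deliberately crude sketch (my own, loosely inspired by Epstein-style rebellion models rather than taken from any of them): compare a micro-level count of rebels, where each agent rebels when grievance exceeds a threshold, with a coarse-grained count based on group-average grievances.

```python
import random

random.seed(42)
N = 900
threshold = 0.8
grievance = [random.random() for _ in range(N)]   # individual-level attribute

# micro-level prediction: count each individual whose grievance is high enough
micro_rebels = sum(g > threshold for g in grievance)

# coarse-grained prediction: replace each group of nine individuals with
# its mean grievance, and treat the whole group as rebelling or not
group_means = [sum(grievance[i:i + 9]) / 9 for i in range(0, N, 9)]
coarse_rebels = sum(9 for m in group_means if m > threshold)
```

With a high threshold the group averages smooth away isolated high-grievance individuals, so this naive coarse-graining underpredicts rebellion; whether some better-chosen coarse rule could still recover the macro dynamics is just the feasibility question posed above.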

Saturday, November 19, 2016

SSHA 2016

Image: Palmer House lobby

The 41st annual meeting of the Social Science History Association is underway in Chicago this weekend. I've been a member since 1998, approaching half the lifetime of the association, and I continue to find it the most satisfying and stimulating of my professional associations. 

The association was founded to create an alternative voice within the history profession, and to serve as a venue for multi-disciplinary approaches to research and explanation in history. It is very interesting that many of the earliest advocates for this new intellectual configuration -- including some of the founders of the association -- are still heavily involved. Bill Sewell, Andrew Abbott, Myron Gutmann (the current president), and Julia Adams all illustrate the importance of interdisciplinary work in their own research and writing, and these social researchers have all brought important innovations into the evolving task of understanding the social world. 

SSHA programs have highlighted a diversity of ideas and approaches over the past four decades -- historical demography, bringing culture into politics, the value of the social-causal mechanisms approach, using spatial techniques (GIS) to help further historical understanding, and the role played by identities of race, gender, and nation in historical process. A renewed interest in Eurasian history and global history is also noteworthy in recent years, offsetting the tendency towards eurocentrism in the history profession more broadly. The approaches associated with comparative historical sociology have almost always had a prominent role on the program. Memorable sessions in previous years by Chuck Tilly, George Steinmetz, Liz Clemens, and Sid Tarrow stand out in my mind. And in recent years there has been a healthy interest in issues of philosophy of history and historiography expressed in the program.

Here is the SSHA's mission statement:

The Social Science History Association is an interdisciplinary group of scholars that shares interests in social life and theory; historiography, and historical and social-scientific methodologies. SSHA might be best seen as a coalition of distinctive scholarly communities. Our substantive intellectual work ranges from everyday life in the medieval world – and sometimes earlier -- to contemporary global politics, but we are united in our historicized approach to understanding human events, explaining social processes, and developing innovative theory.

The term “social science history” has meant different things to different academic generations. In the 1970s, when the SSHA’s first meetings were held, the founding generation of scholars took it to reflect their concern to address pressing questions by combining social-science method and new forms of historical evidence. Quantitative approaches were especially favored by the association’s historical demographers, as well as some of the economic, social and women’s historians of the time. By the 1980s and 1990s, other waves of scholars – including culturally-oriented historians and anthropologists, geographers, political theorists, and comparative-historical social scientists -- had joined the conversation.

SSHA is a self-organizing configuration of scholars, and the annual program reflects the interests and initiative of its members. It is organized around 19 networks, each of which is managed by one or more network representatives. The program consists of panels proposed by the networks, along with a handful of presidential sessions organized by the program committee to carry out the year's theme. (This year's theme is "Beyond social science history: Knowledge in an interdisciplinary world".) Here are the networks and the number of sessions associated with each:


There are several things I especially appreciate about sessions at SSHA. First, the papers and discussions are almost always of high quality -- well developed and stimulating. Second, there is a good diversity of participants across rank, gender, and discipline. And finally, there is very little posturing by participants. People are here because they care about the subjects and want stimulation, not because they are looking to make a statement about their own centrality in the field. There is a healthy environment based on an interest in learning and discussing at the SSHA, not the pervasive careerism of more discipline-based associations.

Sunday, November 13, 2016

DeLanda on concepts, knobs, and phase transitions

image: Carnap's notes on Frege's Begriffsschrift seminar

Part of Manuel DeLanda's work in Assemblage Theory is his hope to clarify and extend the way that we understand the ontological ideas associated with assemblage. He introduces a puzzling wrinkle into his discussion in this book -- the idea that a concept is "equipped with a variable parameter, the setting of which determines whether the ensemble is coded or decoded" (3). He thinks this is useful because it helps to resolve the impulse towards essentialism in social theory while preserving the validity of the idea of assemblage:
A different problem is that distinguishing between different kinds of wholes involves ontological commitments that go beyond individual entities. In particular, with the exception of conventionally defined types (like the types of pieces in a chess game), natural kinds are equivalent to essences. As we have already suggested, avoiding this danger involves using a single term, 'assemblage', but building into it parameters that can have different settings at different times: for some settings the social whole will be a stratum, for other settings an assemblage (in the original sense). (18)
So "assemblage" does not refer to a natural kind or a social essence, but rather characterizes a wide range of social things, from the sub-individual to the level of global trading relationships. The social entities found at all scales are "assemblages" -- ensembles of components, some of which are themselves ensembles of other components. But assemblages do not have an essential nature; rather there are important degrees of differentiation and variation across assemblages.

By contrast, we might think of the physical concepts of "metal" and "crystal" as functioning as something like natural kinds. A metal is a stable material configuration: everything that we classify as a metal has a core set of physical-material properties that determine that it will be an electrical conductor, ductile, and solid over a wide range of terrestrial temperatures.

A particular conception of an assemblage (the idea of a city, for example) does not have this fixed essential character. DeLanda introduces the idea that the concept of a particular assemblage involves a parameter or knob that can be adjusted to yield different materializations of the given assemblage. An assemblage may take different forms depending on one or more important parameters.

What are those important degrees of variation that DeLanda seeks to represent with "knobs" and parameters? There are two that come in for extensive treatment: the idea of territorialization and the idea of coding. Territorialization is a measure of homogeneity, and coding is a measure of the degree to which a social outcome is generated by a grammar or algorithm. And DeLanda suggests that these ideas function as something like a set of dimensions along which particular assemblages may be plotted.

Here is how DeLanda attempts to frame this idea in terms of "a concept with knobs" (3).
The coding parameter is one of the knobs we must build into the concept, the other being territorialisation, a parameter measuring the degree to which the components of the assemblage have been subjected to a process of homogenisation, and the extent to which its defining boundaries have been delineated and made impermeable. (3)
Later DeLanda returns to this point:
A different problem is that distinguishing between different kinds of wholes involves ontological commitments that go beyond individual entities. In particular, with the exception of conventionally defined types (like the types of pieces in a chess game), natural kinds are equivalent to essences. As we have already suggested, avoiding this danger involves using a single term, 'assemblage', but building into it parameters that can have different settings at different times: for some settings the social whole will be a stratum, for other settings an assemblage (in the original sense). (18)
This is confusing. We normally think of a concept as identifying a range of phenomena; the phenomena are assumed to have characteristics that can be observed, hypothesized, and measured. So it seems peculiar to suppose that the forms of variation that may be found among the phenomena need to somehow be represented within the concept itself.

Consider an example -- a nucleated human settlement (hamlet, village, market town, city, global city). These urban agglomerations are assemblages in DeLanda's sense: they are composed out of the juxtaposition of human and artifactual practices that constitute and support the forms of activity that occur within the defined space. But DeLanda would say that settlements can have higher or lower levels of territorialization, and they can have higher or lower levels of coding; and the various combinations of these "parameters" leads to substantially different properties in the ensemble.

If we take this idea seriously, it implies that compositions (assemblages) sometimes undergo abrupt and important changes in their material properties at critical points for the value of a given variable or parameter.

DeLanda thinks that these ideas can be understood in terms of an analogy with the idea of a phase transition in physics:
Parameters are normally kept constant in a laboratory to study an object under repeatable circumstances, but they can also be allowed to vary, causing drastic changes in the phenomenon under study: while for many values of a parameter like temperature only a quantitative change will be produced, at critical points a body of water will spontaneously change qualitatively, abruptly transforming from a liquid to a solid, or from a liquid to a gas. By analogy, we can add parameters to concepts. Adding these control knobs to the concept of assemblage would allow us to eliminate their opposition to strata, with the result that strata and assemblages (in the original sense) would become phases, like the solid and fluid phases of matter. (19)
These ideas about "knobs", parameters, and codes might be sorted out along these lines. Deleuze introduces two high-level variables along which social arrangements differ -- the degree to which the social ensemble is "territorialized" and the degree to which it is "coded". Ensembles with high territorialization have some characteristics in common; likewise ensembles with low coding; and so forth. Both factors admit of variable states; so we could represent a territorialization measurement as a value between 0 and 1, and likewise a coding measurement.

When we combine this view with DeLanda's suggestion that social ensembles undergo "phase transitions," we get the idea that there are critical points for both variables at which the characteristics of the ensemble change in some important and abrupt way.


W, X, Y, and Z represent the four extreme possibilities of "low coding, low territorialization", "high coding, low territorialization", "high coding, high territorialization", and "low coding, high territorialization". And the suggestion from DeLanda's treatment is that assemblages in these four extreme locations will have importantly different characteristics -- much as solid, liquid, gas, and plasma states of water have different characteristics. (He asserts that assemblages in the "high-high" quadrant are "strata", while ensembles at lower values of the two parameters are "assemblages"; 39.)
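The quadrant scheme can be expressed as a toy classifier. Everything here is illustrative: the 0.5 cutoffs stand in for DeLanda's unspecified critical points, and the labels for the two mixed quadrants are my own placeholders.

```python
def classify(territorialization, coding, critical=0.5):
    """Assign an ensemble to one of the four extreme quadrants by
    comparing T and C against a hypothetical critical value."""
    high_t = territorialization >= critical
    high_c = coding >= critical
    if high_t and high_c:
        return "stratum"                      # DeLanda's high-high case
    if high_c:
        return "coded assemblage"             # placeholder label
    if high_t:
        return "territorialized assemblage"   # placeholder label
    return "assemblage"

# an ensemble losing homogeneity and codification crosses the
# critical point and changes "phase"
assert classify(0.9, 0.9) == "stratum"
assert classify(0.2, 0.3) == "assemblage"
```

The sharp threshold is what gives the metaphor its bite: continuous movement along T and C produces an abrupt qualitative change of label, just as continuous change in temperature produces an abrupt change of phase.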

Here is a phase diagram for water:


There are five material states represented here -- solid, liquid, compressible liquid, gaseous, and supercritical fluid -- along with the critical values of pressure and temperature at which H2O passes through a phase transition. (There is a nice discussion of critical points and phase transitions in Wikipedia (link).)

What is most confusing in the theory offered in Assemblage Theory is that DeLanda appears to want to incorporate the ideas of coding (C) and territorialization (T) into the notation itself, as a "knob" or a variable parameter. But this seems like the wrong way of proceeding. Better would be to conceive of the social entity as an ensemble; and the ensemble is postulated to have different properties as C and T increase. This extends the analogy with phase spaces that DeLanda seems to want to develop. Now we might hypothesize that as a market town decreases in territorialization and coding it moves from the upper right quadrant towards the lower left quadrant of the diagram; and (DeLanda seems to believe) there will be a critical point at which the properties of the ensemble are significantly different. (Again, he seems to say that the phase transition is from "assemblage" to "strata" for high values of C and T.)

I think this explication works as a way of interpreting DeLanda's intentions in his complex assertions about the language of assemblage theory and the idea of a concept with knobs. Whether it is a view that finds empirical or historical confirmation is another matter. Is there any evidence that social ensembles undergo phase transitions as these two important variables increase? Or is the picture entirely metaphorical?

(Gottlob Frege changed logic by introducing a purely formal script intended to suffice to express any scientific or mathematical proposition. The concept of proof was intended to reduce to "derivability according to a specified set of formal operations from a set of axioms." Here is a link to an interesting notebook in Rudolf Carnap's hand recording his participation in a seminar given by Frege; link.)

Sunday, November 6, 2016

Nine years of Understanding Society

image: Anasazi petroglyphs at Newspaper Rock

This week marks the ninth anniversary of Understanding Society -- 1105 posts to date, or over 1.1 million words. According to Blogger, over 7 million pageviews have flowed across screens, tablets, and phones since 2010.

The blog has been an ideal forum for me to continue to develop new ideas about the social sciences, and to reflect upon new contributions by other talented observers and practitioners of the social sciences. It is a material record for me of the topics that have been of interest to me over time, like points on a map outlining a driving trip through unfamiliar country. (The photo above was such a moment for me in 1996.) Each entry describes a single idea or insight; taken together, they compose a suggestive map of intellectual development and discovery. During the year I've gotten interested in topics as diverse as the early work of John von Neumann on computing (link, link), Reinhart Koselleck's approach to the philosophy of history (link), quantum computing (link, link), China's development policies (link), and cephalopod philosophy (link). I've continued to work on some familiar topics -- generativity, reduction, and emergence; character and plans of life; causal mechanisms; and critical realism.

It is interesting to see what posts have been the most popular over the past six years (the period for which Blogger provides data):


Key topics in the foundations of the social sciences appear on the list -- structure, power, pragmatism, poverty, mobility. But several novel topics make the top ten as well -- supervenience, assemblage theory, and hate. "What is a social structure?" was written during the first month of the blog. The top keywords in searches are "social structure" and "social mobility".

Some of the philosophical ideas explored in the blog have crossed over into more traditional forms of academic publishing, most notably the appearance of New Directions in the Philosophy of Social Science earlier this fall. (Here is a site I've created to invite discussion of the book; link.) This book bears out my original hope that Understanding Society could become a "web-based dynamic monograph", with its own cumulative logic over time. In framing New Directions it was possible for me to impose a more linear logic and organization on the key ideas -- for example, actor-based sociology, generativity, causal mechanisms, social ontology. As I conceived of it in the beginning, the blog has proven to be a work of open-source philosophy.


A recurring insight in the blog is the basic fact of heterogeneity and contingency in the social world. One of the difficult challenges for the social sciences is the fact that social change is more rapid and more heterogeneous than we would like to think. The founders of sociology, economics, and political science wanted to arrive at theories that would permit us to understand social processes in a fairly simple and uniform way. But the experience of the social world -- whether today in the twenty-first century or in the middle of the nineteenth century -- is that change is heterogeneous, contingent, and diverse. So the social sciences need to approach the study of the social world differently from the neo-positivist paradigm of "theory => explanation => confirmation". We need a meta-theory of social research that is more attentive to granularity, contingency, and heterogeneity -- even as we seek unifying mechanisms and patterns. (The very first post in Understanding Society was on the topic of the plasticity of things in the social world.)

A new theme in the past year is the politics of hate. The emergence of racism, misogyny, and religious bigotry in the presidential campaign has made me want to understand better the social dynamics of hate -- in the United States and in the rest of the world. So an extended series of posts has focused on this topic over the past six months or so (link). This is a place where theory, philosophy, and social reality intersect: it is intellectually important to understand how hate-based movements proliferate, but it is also enormously important for us as a civilization to understand and neutralize these dynamics.

So thanks for reading and visiting Understanding Society! I know that without the blog my intellectual life would be a lot less interesting and a lot less creative. I am very appreciative of the many thoughtful visitors who read and comment on the blog from time to time, and I'm looking forward to discovering what the coming year will bring.

(Mark Carrigan's Social Media for Academics is a very interesting and current discussion of how social media and blogging have made a powerful impact on sociology. Thanks, Mark, for including Understanding Society in your work!)



Saturday, October 29, 2016

Coleman on the classification of social action


Early in Foundations of Social Theory, his theoretical treatise on rational-choice sociology, James Coleman introduces a diagram of different kinds of social action (34). This diagram is valuable because it provides a finely grained classification of kinds of social action, differentiated by the relationships that each kind stipulates among the individuals within the interaction.

Here is how Coleman describes the classification system:
Differing kinds of structures of action are found in society, depending on the kinds of resources involved in actions, the kinds of actions taken, and the contexts within which the actions are taken. (34)
Here is the legend for the diagram:

1. Private actions
2. Exchange relations
3. Market
4. Disjoint authority relations
5. Conjoint authority relations
6. Relations of trust
7. Disjoint authority systems
8. Conjoint authority systems
9. Systems of trust, collective behavior
10. Norm-generating structures
11. Collective-decision structures

The regions of the diagram are organized into a number of higher-order groups:

A. Purposive action
B. Transfer of rights or resources
C. Unilateral transfer
D. Rights to control action
E. System of relations
F. Events with consequences for many

For example, social events falling in zone 8 have these distinguishing characteristics: they involve a transfer of rights to control action, effected through a unilateral transfer within an existing system of relations. An example might be a party to a divorce who surrenders his or her right to control whether the child is moved to another state. This would be a unilateral transfer of control from one party to the other. Events in zone 7 differ from those in zone 8 only in that they do not involve unilateral transfer. The same example can be adjusted to a zone 7 case by stipulating that both parties must agree to the transfer of control of the child's residence.
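The combinatorial character of the diagram can be made explicit with a small sketch (my own illustration, not anything in Coleman's text): represent each numbered zone by the set of higher-order groups it falls under, so that any two zones are distinguished by the symmetric difference of their feature sets. Only zones 7 and 8 are encoded here, following the description above.

```python
# Sketch: zones of Coleman's diagram as sets of higher-order features.
# Feature labels follow the legend (groups A-F); only zones 7 and 8 are
# encoded, as described in the text. This is an illustration, not a formalism
# Coleman himself provides.

ZONES = {
    # zone 7: no unilateral transfer
    7: frozenset({"purposive action",
                  "transfer of rights or resources",
                  "rights to control action",
                  "system of relations"}),
    # zone 8: same features plus unilateral transfer
    8: frozenset({"purposive action",
                  "transfer of rights or resources",
                  "unilateral transfer",
                  "rights to control action",
                  "system of relations"}),
}

def distinguishing_features(a, b):
    """Features present in exactly one of the two zones."""
    return ZONES[a] ^ ZONES[b]

print(sorted(distinguishing_features(7, 8)))  # -> ['unilateral transfer']
```

On this representation the divorce example is just a toggle of one feature: requiring both parties' agreement removes "unilateral transfer" and moves the event from zone 8 to zone 7.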

It is interesting to observe that the whole diagram takes place within the domain of purposive action (A). This illustrates Coleman's fundamental presupposition about the social world: that social outcomes result from purposive, intentional actions by individuals. If we imagined that religious rituals were purely performative, serving as expressions of inner spiritual experience -- we would find that these "social events" have no place in this diagram. Likewise, if we thought that there is an important role for emotion, solidarity, hatred, or love in the social world -- we would find that actions and phenomena involving these factors would have no place in the diagram.

It would be interesting to attempt to populate a more complex diagram with an initial structure something like this:


Would this modified scheme give a different orientation to the "sociological imaginary"? Might we imagine that the theories of important intersectional figures like Bourdieu, Tilly, or Foucault might fall in the intersection of all three circles? Would episodes of contentious politics involve actions that are purposive, emotive, and performative? Is there any reason (parsimony, perhaps) to attempt to reduce emotion and performance to a different kind of purpose? Or is it better to honestly recognize the diversity of kinds of action and motivation? My inclination is to think that Coleman's choice here reflects "rational choice fundamentalism" -- the idea that ultimately all human actions are driven by a calculation of consequences. And this assumption seems unjustified.



Wednesday, October 26, 2016

New structural economics


Does economic theory provide anything like a concrete set of reliable policies for creating sustained economic growth in a middle-income country? Some contemporary economists believe that it is possible to answer this question in the affirmative. However, I don't find this confidence justified.

One such economist is Justin Yifu Lin. Lin is a leading Chinese economist who served as chief economist of the World Bank from 2008 to 2012. So Lin has deep knowledge of the experience of developing countries and their efforts to achieve sustained growth. He believes that the answer to the question posed above is "yes", and he lays out the central components of such a policy in a framework that he describes as the "new structural economics". His analysis is presented in New Structural Economics: A Framework for Rethinking Development and Policy. (The book is also available in PDF format from the World Bank directly; link.)

Lin's analysis is intended to be relevant for all low- and middle-income countries (e.g. Brazil, Nigeria, or Indonesia); but the primary application is China. So his question comes down to this: what steps does the Chinese state need to take to burst out of the "middle income trap" and bring per capita incomes in the country up to the level of high-income countries in the OECD?

So what are the core premises of Lin's analysis of sustainable economic growth? Two are most basic: the market should govern prices, and the state should make intelligent policies and investments that encourage the "right kind" of innovation in economic activity in the country. Here is an extended description of the core claims of the book:
Long-term sustainable and inclusive growth is the driving force for poverty reduction in developing countries, and for convergence with developed economies. The current global crisis, the most serious one since the Great Depression, calls for a rethinking of economic theories. It is therefore a good time for economists to reexamine development theories as well. This paper discusses the evolution of development thinking since the end of World War II and suggests a framework to enable developing countries to achieve sustainable growth, eliminate poverty, and narrow the income gap with the developed countries. The proposed framework, called a neoclassical approach to structure and change in the process of economic development, or new structural economics, is based on the following ideas:

First, an economy’s structure of factor endowments evolves from one level of development to another. Therefore, the industrial structure of a given economy will be different at different levels of development. Each industrial structure requires corresponding infrastructure (both tangible and intangible) to facilitate its operations and transactions.

Second, each level of economic development is a point along the continuum from a low-income agrarian economy to a high-income post-industrialized economy, not a dichotomy of two economic development levels (“poor” versus “rich” or “developing” versus “industrialized”). Industrial upgrading and infrastructure improvement targets in developing countries should not necessarily draw from those that exist in high-income countries.

Third, at each given level of development, the market is the basic mechanism for effective resource allocation. However, economic development as a dynamic process entails structural changes, involving industrial upgrading and corresponding improvements in “hard” (tangible) and “soft” (intangible) infrastructure at each level. Such upgrading and improvements require an inherent coordination, with large externalities to firms’ transaction costs and returns to capital investment. Thus, in addition to an effective market mechanism, the government should play an active role in facilitating structural changes. (14-15)
So a state needs to secure the conditions for well-functioning markets; and it needs to establish an industrial strategy that is guided by a careful empirical analysis of the country's comparative advantage in the global economic environment. In practice this seems to amount to the idea that the middle-income economy should identify the leading economies' declining industries and compete with those on the basis of labor costs and mid-level technology. Lin also emphasizes the important role of the state in making appropriate infrastructure investments to support the chosen industrial strategy. This is a "structural economic theory" because it is guided by the idea that a developing economy needs to incrementally achieve structural transformation from a given mix of agriculture, industry, and services to a successor mix, based on the resources held by the economy that give it advantage in a particular set of technologies and production techniques. Here is a representative statement:
Countries at different levels of development tend to have different economic structures due to differences in their endowments. Factor endowments for countries at the early levels of development are typically characterized by a relative scarcity of capital and relative abundance of labor or resources. Their production activities tend to be labor intensive or resource intensive (mostly in subsistence agriculture, animal husbandry, fishery, and the mining sector) and usually rely on conventional, mature technologies and produce “mature,” well-established products. Except for mining and plantations, their production has limited economies of scale. Their firm sizes are usually relatively small, with market transactions often informal, limited to local markets with familiar people. The hard and soft infrastructure required for facilitating that type of production and market transactions is limited and relatively simple and rudimentary. (22)
Some common development strategies fail to conform to these ideas. So, for example, import substitution is a bad basis for economic development, because it subverts the market and distorts the investment strategies of the state and the private sector; it fails to guide the economy along a path of incremental comparative advantage (18).

What this analysis leaves out completely is the goal of economic development -- improving human wellbeing. Indeed, the word "wellbeing" does not even appear in the book. And certainly the perspective on development offered by Amartya Sen in his theory of capabilities and realizations is completely absent. This is unfortunate, because it means that the book fails to address the most important issue in development economics: what the fundamental good of economic development is, and how we can best approach that good. Sen's answer is that the fundamental good is to increase the wellbeing of the globe's total population; and he interprets that goal in terms of his idea of human flourishing. (Sen's theory of economic development is provided in many places, including Development as Freedom. Here is a recent statement by Sen, Stiglitz, and Fitoussi on why GDP and growth in GDP are inadequate ultimate measures of development success; Mismeasuring Our Lives: Why GDP Doesn't Add Up.) Sen's fundamental view is this: the most important goal that a state can have is to create policies that enhance the development of the human capabilities of its population. In particular, social resources should be deployed to enhance education, health, domicile, and personal security. In such an environment individuals can have the fullest satisfaction of their life goals; and they can be the most productive contributors to innovation and growth in their societies. Well-educated and healthy people are an essential component of economic success for a country. But significantly, Lin does not address these "quality of life" factors at all (another phrase that does not occur once in the book).

Even less does Lin's theory address the kinds of issues raised by "post-development" thinkers like Arturo Escobar in Encountering Development: The Making and Unmaking of the Third World. Escobar challenges some of the most basic assumptions of classical economic development theory, beginning with the idea that industry-led structural transformation is the unique pathway to human flourishing in the less-developed world. Escobar's critique involves several ideas. First is the observation that economic development theory since 1945 has been Eurocentric and implicitly colonial, in that it depends upon exoticized representations of the industrialized North and the traditional agricultural South. Against this colonial representation of global development Escobar emphasizes the need for a more ethnographic and cultural understanding of development. Second, this Eurocentric view brings along with it some crucial distributive implications -- essentially, that the resources and labor of the developing world should continue to provide part of the surplus that supports the affluence of the North. Third, Escobar casts doubt on the value of development "experts" in the design of development strategies for poor countries in the South (46). Local knowledge is a crucial part of sound economic progress for countries like Nigeria, Brazil, or Indonesia; but the development profession seeks to replace local knowledge with expert opinion. So Escobar highlights local knowledge, the importance of culture, and the importance of self-determination in theory and policy as key ingredients of a sustainable plan for economic development in the countries of the post-colonial South.

Why do these alternative approaches to development theory matter? Why is the absence of a discussion of wellbeing, flourishing, or culture an important lacuna in New Structural Economics? Because it results in a view of economic development that lacks a compass. If we haven't given rigorous thought to what the goal of development is -- and Sen demonstrates that it is entirely possible to do that -- then we are guided only by a rote set of recommendations: increase productivity, increase efficiency, increase market penetration, increase per capita income. But a substantial rise in economic inequality during a growth process means that it is very possible that only a minority of citizens will share in the gains. And the fact that a typical family's income has risen by 50% may be less important for their overall wellbeing than the availability of a nearby health clinic. And both of these kinds of considerations seem to be relevant in the case of China. It is well documented that there has been a substantial increase in China's income (and wealth) inequalities in the past thirty years (link, link). And it is also reasonably clear that China's commitment to social security provisioning is far lower than that of OECD countries. So it is far from clear that China's recent history of growth has been proportionally successful in enhancing the quality of life and human flourishing of the mass of its population (link).

The unstated assumption is that countries that pursue these prescriptions -- "maintain efficient markets, adopt an industrial strategy that accurately tracks shifts in comparative advantage, support investment in appropriate infrastructure to reduce transaction costs" -- will have superior long-term growth in per capita income and will be better able to ensure enhancements in the quality of life of their citizens. But this is nothing more than naive confidence in "trickle-down" economics. It ignores completely the problem of the likelihood of rising economic inequalities, and it doesn't provide any detailed analysis of how quality of life and human flourishing are supposed to rise. Development economics without capabilities and wellbeing is inherently incomplete; worse, it is a bad guide to policy choices.