The Instability of Modernity


 

The attack on modernity is a cliché at this point. Even if that critique once belonged to the domain of reactionary intellectuals and apocalyptic sects, it now simply forms part of popular culture, embedded in movies, music, and video games. Recently, I opened a book by Ernesto Sabato, a surrealist writer of the 1950s. The first paragraphs describe a world where humans have turned into cogs of the capitalist machinery and slaves of instrumental reason. Although this proposition may have sounded profound and original to the 1950s reader, it made me close the book. I didn’t stop reading because his viewpoint was incorrect or stupid, but simply because I would learn nothing from it, for I have encountered that perspective throughout most of my life, through the internet, television, and contemporary thinkers.

However, whatever becomes a permanent topic of conversation is trivialized, converted into the unexamined chatter of the “they”. The critique of the technic and the Enlightenment is no longer a novel observation – not as it was in the first half of the 20th century, when it was first developed by Heidegger and the Frankfurt School. This position is now simply a fossilized point within any superficially “anti-systemic” perspective, whether from the far right or left, or from a popular music band. Given the state of this critique, it is necessary to re-analyze its premises, since its ossified form has led to the obliteration of its objects: namely, the triumphs of the Enlightenment. In the late 19th and early 20th centuries, when the merits of the Enlightenment were considered beyond question, it was necessary to excavate the dark side of the technic, that logic of scientific domination that has transformed human labor and the Earth into an accumulation of commodities. Yet now that anti-technic arguments are dominant, we must once again reconsider the successes of the Enlightenment.

 

To clarify the discussion, I will briefly define what I mean by the technic. I do not refer to technology itself, given that technology has existed in artisanal form since before civilization. What I mean instead is a scientific-technical totality (both social and “instrumental”) used to abstract the universe into discrete entities that can be studied in a fundamental manner, entities subject to universal laws. The technic also facilitates the manipulation of these entities through science and coordination for utilitarian ends (e.g. engineering, logistics, etc.). This totality incorporates technology, but it cannot simply be reduced to the neutral application of science. We can locate the origin of this conception of the technic in the 17th century, with the Enlightenment and the thought of Newton and Descartes. The technic is used to understand and manipulate not only the natural world but also society, through logistics, psychology, and coordination (e.g. marketing, industrial engineering, public administration).

I will not attack this anti-technic perspective in its totality, for I don’t think it’s completely flawed. Only a technocrat or a white supremacist would have the guts to defend modernity as something wholly positive. Modernity brought the Holocaust, the atomic bomb, and the rape and pillage of the Americas. Information technology has given rise to a surveillance state that would make the secret police of Stalin and Hitler die of jealousy. The rationalization of the natural world so that it can be exploited by logistics and technics is driving global warming, a process that will not only kill hundreds of thousands through hurricanes and heat waves, but will bring incalculable socio-economic devastation. There’s a reason why science fiction projects worlds of evil computers and ecological destruction – this recognition of the dark side of the technic is rooted in the marrow of Western culture. Given the empirical evidence and the sentiment of this era, it would be irresponsible to hide the crimes of our technical society. Finally, as Heidegger once argued, the technic has concealed that qualitative part of truth that is not quantifiable, such as the poetic dimension of a forest, or social structures that are invisible to calculation but that still scaffold the power differentials between classes, races, genders, etc.

Many socialists of a more positivistic bent would argue that this critique isn’t about the scientific-technical society, but about capitalism. They say that technology and science are neutral, and that they can be used for pro-social ends as much as for destructive ends. Science could be used for the good – for the construction of a sustainable world, with automation and cybernetics applied to emancipate society from toil and hunger, and, in a distant future, to liberate humanity from the limits of an organic and mortal body. But this viewpoint gives science an ahistorical role that not even the old thinkers of the Enlightenment expounded. Science, as we understand it, is not simply a continuity that begins with the prehistoric origin of tools and human curiosity and ends in the present. Modern science emerged and evolved in combination with capitalist development. The technic, as defined at the beginning of this essay, has only existed for a couple of centuries. In contrast to modernity, the technologies invented in more ancient epochs were not coupled with an all-encompassing perspective that treats the universe as a machinery to be manipulated for utilitarian ends; they simply emerged through trial and error. This conceptualization of the cosmos is linked with the abstraction of all social relationships, such as the transformation of peasants and artisans into a homogeneous proletariat subject to the coordination of a technical-logistic rationality. This rationality was described in the first chapter of “The Wealth of Nations” by Adam Smith. The destruction of the community’s organic unity and its replacement by price signals and coordination was not simply a neutral process of abstract problem solving, but the creation of an efficient machinery destined for capital valorization.

However, the various tendencies of the technic are not simple and unidirectional. Critics attack the technic for its homogenizing violence and its subsumption of the particular under the universal through the force of abstraction, noting that the technic privileges the “scientific” narrative over others (e.g. religion); but these critics are victims of their own “post-structuralist” abstractions. A more rigorous and charitable analysis would see the technic as an unstable, contradictory system. The power of scientific-technical abstraction isn’t only used to convert humans and forests into piles of labor and lumber that can be dissected and manipulated. That tendency undoubtedly exists, and it represents a drive toward domination, but there are also emancipatory tendencies, both ideological and material. For example, the radical wing of the Enlightenment, represented by the likes of Spinoza, considered the technic an instrument for establishing a democratic and egalitarian society, a weapon against popes, kings, and lords.

Since the technic does not require divine revelation to be accessed – it simply uses the rational capacity of any human being – it becomes emancipatory. The physical laws of Newton and the geometry of Descartes were discovered through calculation, abstraction, and analysis, mental capacities universal to all human beings (Kant); these discoveries weren’t handed down through divine revelation, like the content of religious texts or the divine right of kings. If all humans have the capacity for calculation and reason, and if the optimal social order can be excavated by the technic, just as engineering can be used to create the most optimal machinery, then the consequence of this argument is that all humans, with their autonomous reason, can participate in the political and social administration of the social order.

This defense of the technic was of an ideological nature. However, there is also a material defense of the technic, originally outlined by Marx, that was then confirmed empirically by the trajectory of Western Europe. The rationalization of the European peasantry into free laborers no longer attached to the land dissolved the agricultural patriarchy. Before, the peasantry was constrained by the land, and the youth were completely subjected to the power of their parents and the feudal lord. Specifically, the youth had to inherit the land from their parents, they required a dowry from those parents in order to marry, and they had to swear fealty to feudal lords.

This emancipation of labor from the land also brought the structures that scaffold gender rights in modern liberal democracies, which, while imperfect, were an advancement in Western Europe. The decline of the lords’ power, driven by the increasing concentration of ex-peasants in cities and towns and the emergence of industrial capital, also transferred power from rural areas to urban centers, where workers, embedded in the industrial infrastructure, became indispensable to the circuits of capital, since the fixed capital of industrialists would lose value without workers to operate it. Because workers were embedded in the logistical mesh of the economy, and had thereby become indispensable, they were able to acquire democratic rights (Endnotes). Furthermore, the historian Geoff Eley argued that the vigorous expansion of democratic rights in the second half of the 19th century and the beginning of the 20th was triggered by workers’ movements – movements that would not have emerged if the technic hadn’t transformed the peasantry into proletarians, since that rationalization integrated workers into the political and logistical meshes of the city.

All these tendencies, one in the direction of domination, the other in the direction of liberty and democracy, do not converge on a common course but instead create instabilities. In physics, an instability means that a system can move in many directions, without the properties of the system revealing a favoritism for any particular trajectory. For example, in the case of a ball at the top of a perfectly symmetric hill, random perturbations like the wind can push the ball in any direction across the full three hundred and sixty degrees, with every trajectory equiprobable. The same instability exists in the technic. Some directions point toward democracy, enlightenment, a world of leisure, health, and education. Other tendencies of the technic point in opposite directions, such as the surveillance state, scientific racism, the atomization of all communities, and the extermination of all life.
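To make the physics metaphor concrete, here is a minimal toy simulation (my own sketch, not part of the original argument): a ball balanced at an unstable equilibrium, where microscopic random kicks, rather than any property of the system itself, decide the final direction of motion.

```python
import random

def simulate(seed, steps=1000, dt=0.01):
    """Ball at the top of a hill: the equilibrium at x = 0 is unstable,
    so the restoring force pushes *away* from the peak. Tiny random
    kicks ("the wind") decide which side the ball ends up rolling down."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0                       # start exactly at the peak, at rest
    for _ in range(steps):
        kick = rng.gauss(0, 1e-6)         # microscopic perturbation
        v += (x + kick) * dt              # unstable dynamics: x'' = +x
        x += v * dt
    return x

# Identical dynamics, different microscopic noise -> divergent outcomes.
outcomes = [simulate(seed) for seed in range(6)]
print(['+' if x > 0 else '-' for x in outcomes])
```

The dynamics are the same for every run; only the noise differs, yet the trajectories diverge exponentially to one side or the other, which is the sense in which no direction is "preferred" by the system itself.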

All these trajectories of the technic aren’t simply a function of something external, as the positivists claim when they characterize the technic as neutral; they actually emerge from the internal dynamics of the system – a tendency toward calculation, universalism, and abstraction. But this negative narrative about the technic – the drive that brought the extermination of six million Jews through industrial-scientific means, and that is bringing about the cooking of the Earth – while accurate, is only one tendency among others that emerge from an instability. The same instability also brought the democratic rights of workers, the decline of child mortality, the Haitian Revolution, and the destruction of the agricultural patriarchy in Europe, the latter a process that also brought gains in gender equality.

The technic has various potentialities: one that dominates and kills, and another that illuminates and liberates. However, a society obsessed with the technic, such as modern capitalism, will always push that instability toward the trajectory of domination. Capitalism found a vehicle for its own manifestation in the technic, for a society organized by price signals will always tilt toward the violence of calculation. The properties of the human being that cannot be abstracted into a number become unintelligible – such as social and psychological needs. The technocrat only sees GDP growth, and the boss can only calculate surplus value. This aspect of the technic, which sees in forests and human beings only stores of energy, was described by Heidegger, who argued that the technic obscures and blocks the other aspects of truth that are not quantifiable. However, he forgot to add that this aspect of the technic is only one potentiality, the tendency embedded in capitalism, given that a society ruled by money, a quantitative substance, will only exploit a narrow calculus at the expense of other, more holistic aspects of the technic that may have emancipatory qualities.

This deconstruction of the technic as a totalitarian force requires a socialist synthesis. Socialism is the descendant of the Radical Enlightenment, that tendency toward a world where humanity uses reason to create a free and democratic society, where social needs are satisfied by the economic order. The highest manifestation of the socialist technic emerges in the planned economy, under a world workers’ republic. However, socialists also argue that not everything can be abstracted into numbers, for social and psychological needs are not entirely intelligible to calculation.

Marx described this qualitative aspect of the technic, for the rationalization of the human being within a division of labor dissects the body and mind, turning them into something automatic and alienated. Therefore, a socialist synthesis, while using the technic to plan a rational economy, must also yield a specific magisterium to the more spiritual and qualitative aspects of the human being. For example, under capitalism, one of the main objectives of national policy is GDP growth. In a socialist society, however, growth of productivity and efficiency wouldn’t be the priority, for there would be other objectives related to the flourishing of human beings. Many of these objectives cannot be subsumed into equations and rational dissection, but require a space outside the technic. Socialism should therefore be a synthesis where the technic enhances other, more qualitative modes of life, instead of subsuming them under quantitative abstraction.


For a scientific economy (part I): planning, not the free market, made the West wealthy


The economy is a complex system – a system pulsating with billions of agents that interact with one another, sometimes on planetary scales. From these interactions, which at first glance appear random, emerge properties and laws subject to certain quasi-deterministic logics. These emergent laws were first studied by the old political economists of the 18th and 19th centuries, like Smith, Ricardo, and Marx. One such emergent law is the existence of certain resources that are treated as “free”: they are extracted without their value being reflected in prices. For example, the domestic sphere, which contains house cleaning, child rearing, and cooking, is “free” since this labor is not remunerated and is not reflected in prices. In other words, the domestic sphere is invisible to the price system, even though this labor is necessary for the reproduction of the worker and therefore for the reproduction of capital.

A similar phenomenon appears in capital’s treatment of the environment. Marx argued that the value of a commodity is a function of the quantity of labor employed in creating it. If we follow this idea, then the value of lumber, oil, and water simply reflects the labor spent extracting, processing, and transporting these resources. The natural resource in itself, before it is processed by such labor, has no value. We could say, then, that these resources are “free” for capital – open to absolute plunder provided there is access to enough labor power. The domestic and environmental spheres are therefore unintelligible to capital – and this unintelligibility is a law that emerges from the random processes of billions of agents (firms, workers, consumers, etc.) without those agents being conscious that their coupled actions give rise to it. This was Hegel’s observation; he referred to this emergence as the “cunning of reason”, since even though the passions of different human beings are varied and contradictory, they somehow end up coalescing into an intelligible motion of history.

This problem, the invisibility of the Earth system to capital, has become one of the most important political-economic problems of the last couple of decades, since this unintelligibility of the environment to price signals has pushed the planet to the brink of devastation. In its spontaneous state, the price does not reflect the ecological damage that the extraction and use of certain natural resources inflict on the planet. In conventional economics, this damage is referred to as an “externality”, since its information is not reflected in the price. Economists usually recommend that these “externalities” be transformed into internal variables through the intervention of the state. In other words, conventional economists recognize the existence of destructive emergent phenomena springing from capital, which require public planning so that the ecological damage becomes intelligible to the capitalist system. Yet this intelligibility is applied in an external manner: it does not emerge spontaneously from the market, but from the conscious planning of the state, through the volition of scientists, politicians, and judges.
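The mechanics of internalizing an externality can be illustrated with a toy calculation (the numbers and function names here are hypothetical, my own sketch): a firm’s spontaneous price reflects only its private costs, and the ecological damage only enters the price once the state consciously imposes it, e.g. via a carbon tax.

```python
# Toy model (hypothetical numbers): a state-imposed tax makes an
# "externality" visible to the price system.

def market_price(private_cost, markup=0.2):
    """Spontaneous price: only private costs enter the calculation."""
    return private_cost * (1 + markup)

def planned_price(private_cost, ecological_damage, markup=0.2):
    """Price after the state internalizes the damage (e.g. a carbon tax)."""
    return (private_cost + ecological_damage) * (1 + markup)

# A barrel of oil: $50 of extraction/labor cost, plus $30 of climate
# damage that the spontaneous market cannot "see".
print(market_price(50))        # private costs only
print(planned_price(50, 30))   # damage internalized by state planning
```

The point of the sketch is that nothing inside `market_price` can ever discover the damage term; the second function only exists because an external will (the state) legislates it into the calculation.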

This need for governmental intervention to tackle climate change shows something very important: not only can the emergent laws of the market not solve the existential problems facing the human race, but these same laws often enlarge them – for example, the tendency of capital to perceive the environment as a store of free resources that can be ransacked. This observation is more profound than the idea that capitalism is guilty of all the problems of modernity, a proposition often pronounced reflexively, without much thought. The principal problem of capitalism is that it is not designed for the direct resolution of problems – capitalism is a system that, to a first approximation, emerges from the random interactions of billions of consumers, firms, and workers, without the behavior of the system being the result of anyone’s volition. Emergent phenomena and laws, originating in the random behavior of billions, control the destiny of the system – laws that will not necessarily have favorable consequences and that can, in reality, impact society in a destructive way. Not only can these laws be destructive, but no human will legislated them; they emerged spontaneously from aleatory processes.

The incapacity of the market to resolve the fundamental problems of society is well known to the state’s functionaries, and this is why much of the infrastructure society requires is publicly planned. Some examples are roads, railways, medical services, the police, and education. Many markets owe their existence to technology and methods developed by the state, since only the state is capable of absorbing the large financial risk, and has the will to spend the necessary capital, to springboard certain industries. For example, the oil industry in Alberta, Canada was able to emerge thanks to Peter Lougheed’s government investing in the technology and research required to make the oil industry profitable – an industry that today is the most important in that province. This fact is interesting since it contradicts the usual “right wing” attitude of Alberta, one of the most conservative provinces in Canada.

If, at first glance, planning appears so superior to the market, why then is the free market the economic orthodoxy? In spite of the existence of a class system, why did capitalism enrich certain countries such as Great Britain, the Netherlands, and the United States? The defenders of capitalism, even if they admit it is not a perfect system, can point to certain advantages the market brought. In the core economies, the twentieth century saw an exponential jump in the quality of life, even for the poorest individuals. It can also be said that the countries that “invented” capitalism, such as Great Britain, the United States, the Netherlands, and Germany, are some of the wealthiest countries in the world. The socialist bloc, even though it brought immense material gains to its populations, raising millions from poverty, could never surpass the capitalist West in production and riches. Not only did the socialist bloc fall, encouraging all reactionaries to proclaim the supremacy of the market, but some of the “socialist” countries are materially fairly miserable, such as North Korea or Venezuela. Finally, many see China as proof of the supremacy of the market, since its economy has grown rapidly after opening up to the global market. All these facts are frequently used to justify the greatness of the market and attack the socialist project.

Without doubt there are countries that adopted the doctrine of the free market and remain miserable, such as the majority of peripheral countries. Yet the arguments of neoliberals must be dissected – we must find the secret that brought capitalism to the resolution of certain technological and social problems, which gave the advantage to the so-called “first world” countries. This problem is important since, as we said, the atom of capitalism is the aleatory interaction between agents, such as firms and consumers, not a unified and conscious problem-solving will. Yet even without a “plan”, capitalism invented the diesel engine and modern medicine, made certain countries immensely rich, and led to longer life expectancies. We must discover why Smith’s “invisible hand” could resolve certain problems that benefited so many people.

One of the developments that brought capitalism to its almost total triumph is instrumental reason. We will define instrumental reason as that technic which applies scientific rationality toward some end. For example, instrumental reason can be applied to public health, using research, medicine, and logistics to neutralise an epidemic. The end of instrumental reason is axiomatic: in the case of capitalism it is often profitability, toward which the rationality of marketing, engineering, and logistics is applied. Many philosophers and sociologists argue that instrumental reason emerged with capitalism. But there is a contradiction here: we have said that the phenomena of capitalism are not legislated by conscious volitions, but emerge spontaneously from the stochastic and granular interactions of agents. Yet, at the same time, we have argued that capitalism emerged coupled to instrumental reason, that scientific rationality which must be applied by a conscious will to accomplish some intelligible end. Furthermore, with capitalism emerged bureaucratic rationality, something Weber was obsessed with. How can a system that apparently emerges from disordered interactions give rise to a rationality that requires conscious human volition?

The answer is that instrumental reason was not the result of conscious wills that sat at a table and discussed its creation, but emerged spontaneously from class struggle. This was the thesis of the Marxist historian Robert Brenner. In the sixteenth century there was an agricultural revolution in the British countryside, where scientific and technical methods were applied to reorganize production, such as animal husbandry, or the centralization of infrastructure to create a more efficient production chain. In order to reconfigure production in a more scientific and efficient manner, it was necessary for the lords to strip the peasantry of their lands and convert them into waged workers. It then became possible for the lords to centralize and manipulate the land through scientific reason, investing in infrastructure and training the peasantry in the more advanced skills required for a more scientific agriculture and animal husbandry. These advances were motivated by the growing absorption of the lords into the logic of the commodity, where it was necessary to produce crops more quickly, efficiently, and cheaply in order to survive the brutal world market.

Brenner argued that this agricultural revolution did not appear in other regions of Europe, such as France or Poland. In France, the peasantry had such a high level of organization and autonomy that it was impossible for the lords to break them and atomize them into waged workers. In Eastern Europe, the lords were so powerful and the peasantry so dominated that the creation of a waged class was unnecessary: the lords dealt with growing mercantile competition by simply hyper-exploiting a peasantry that had no means to defend itself. It was therefore the specific balance of class forces that brought about the agricultural revolution that converted Great Britain into a world power. We could say, then, that Great Britain invented instrumental reason as it is used today – where the unquestionable end was profitability, and the rational means were the technical reorganization of the countryside.

If we take Brenner’s thesis seriously, then the economic supremacy of certain capitalist countries was forged by the technical reorganization of their societies toward the end of creating commodities more cheaply and efficiently. Yet for Brenner, this technical jump does not appear spontaneously out of the existence of commercial relations, such as the buying and selling of commodities, but only emerges if the configuration of classes permits it, and only in specific industries. From Brenner’s observation we can perceive the weakness of the orthodoxy’s pro-capitalist arguments: the source of wealth of specific capitalist countries was subject to certain exogenous restrictions – in the case of Great Britain, the balance of class forces between lords and peasants. This thesis is incompatible with Smith’s theory, which saw technical reorganization as an endogenous process of the market.

Once certain capitalist sectors developed instrumental reason, the State began to wield this process and applied it toward other, more macroscopic ends. The foundations of this state were established in the French Revolution at the end of the eighteenth century, where secular reason replaced the arbitrary deliberations of the nobility with the will of the people – where human beings, by their free will, legislate laws that emerge from rational discourse. This was Hegel’s fevered dream, where Reason, the essence of the true God, manifests itself in the kingdom of this world as a secular and rational State. In this new order, the proto-capitalists converted the small landholdings of the peasantry into proto-factories, centralized under the sun of reason, and broke communities into atomized proletarians who could be integrated into a logistical mesh.

Once agricultural capitalists unleashed instrumental reason onto society, the state began to apply it to more ambitious ends, since this reason, as wielded by capitalists, was constrained and distorted by the small spatial and temporal scales of profitability. First, the French Revolution applied this reason politically, using it to deal the final blow to the lords and kings. But then the state began to use science and the technic for concrete ends, developing large public projects that the private sector could never develop by itself, such as railways, electric grids, and public plumbing, coordinating thousands of workers across the continent. The socialists of the Second International, who marvelled at public planning, demonstrated the irrationality of capitalism – for it was absurd to let private companies take advantage of public infrastructure and coordination, necessary ingredients for the survival of capital. It would be simpler to submit the private sphere in its totality to public planning through the socialization of firms.

However, the incapacity of capitalism to resolve certain social problems, coupled with the flourishing of instrumental reason in early capitalism, demonstrates that capitalism’s technical triumph wasn’t necessarily a function of the free market but of the class struggle, in which the lords destroyed the old forms of life of the peasantry and replaced them with the atomized worker. Once the proletarian is unchained from the countryside and the patriarchal household, instrumental reason can integrate them into a scientific and coordinating logic. This scientific logic, which enriched the countries that developed capitalism first, was not the market. Instrumental reason only emerged when the peasantry was stripped of its old forms of life rooted in soil and honor, and its labor power was centralized at larger scales to produce commodities more efficiently. In other words, the technical supremacy of “first world” countries was unleashed by processes exogenous to the market, such as the class struggle between lords and peasants.

Therefore, those countries that could not destroy the processes that root the individual to a spatial coordinate did not transcend their material backwardness when they adopted capitalism. Traditions and communal constraints did not let capitalists and politicians completely abstract human beings from their spatial roots and convert them into an atomized mass that can be manipulated by scientific coordination.

A skeptic could use the experience of the old socialist bloc to protest against my argument, since in these countries scientific reason was supposedly applied through planning. Yet these states were never able to surpass the capitalist West in wealth and technical prowess, and their foundations were so unstable that many could barely survive the duration of a human life. Moreover, some of these supposedly planned economies remained materially miserable, such as North Korea. This argument, however, is defective, for at this point in time all national economies are embedded in a world capitalist system, where, notwithstanding the alleged existence of scientific planning, the law of value operating internationally would starve any country refusing the market. All states are embedded in an international division of labor, and ultimately the computers, vaccines, and engines required to build a hospital can only be imported from rich countries through dollar transactions.

The existence of this antagonistic world order does not mean that these socialist societies had no endogenous dysfunctions, such as a lack of democratic rights, corruption, and bureaucratic hierarchies. However, the political problems of these countries, including their authoritarianism and corruption, were caused in part by the aggression of the political-economic world order. For example, the dysfunctions of the USSR, such as Stalinism, emerged partly as a reaction to an aggressive West, since the USSR had to industrialize in a fast-paced, disjointed manner in order to create a war machine that could defend it against the “capitalist” West. Furthermore, in many of these socialist countries, forms of life endured that predated modernity, sabotaging instrumental reason, since those pre-modern structures enabled the survival of the patron and the client at the expense of the collective. It is very probable that the alleged inefficiencies and inflexibilities of planning are not inherent, but related to the antagonistic nature of the capitalist world order and the antiquated forms of life embedded in many of these countries. Wealthy capitalist countries did not necessarily enrich themselves only through the market, but through the destruction of the peasantry and the application of instrumental reason – in other words, through planning.

This concludes the first part of this essay. The second part will deal with the necessity of socialist planning to combat climate change.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

 

The Rise of the Right Wing Is Not Due to the Working Class Because Workers Don’t Vote


A common and mistaken assumption among radicals is that right wing parties win because of ideological trickery and lies. In other words, that the electorate does not understand its own class interests, and is bamboozled by smooth-talking politicians. For example, a popular idea about American politics is that poor whites tend to vote against their own interests, as there is the preconception that they form part of the electorate of Trump and the GOP. Just recently in Ontario, Doug Ford, a millionaire, won the provincial elections on a very vague platform that included lowering taxes and “anti-elitist” rhetoric, very similar to Trump’s “drain the swamp” antics. Some pointed out the contradiction of the wealthy Ford running under an anti-elitist platform – seeing it as a form of ideological articulation and nothing else.

In general, there has been a rise of the right wing in elections across the developed West. A couple of high-profile examples are Trump, Brexit, and the recent German elections. Furthermore, fascistoid parties have recently taken power in some European countries, like Hungary and Poland. Superficially it may seem that these parties are the “will of the people”, since they won by an arithmetic majority in democratic countries. For leftists this may seem hopeless, as it could be interpreted to mean that we lost the ideological battle, and that much of the Left’s traditional demographic (e.g. workers) has fallen into reaction.

I find that these sentiments begin with the wrong (and liberal) idea that the body of citizens is an amorphous, classless set of individuals that must be “won over” so that they do not turn right wing. Another iteration of the same argument is that many voters are going against their “own interest” by voting for the right wing – for example, the common archetype of the poor rural white who votes Republican.

The worst aspects of these assumptions are found in the mainstream of the Left, especially social democratic and “center left” parties. Since the electorate at first glance seems to swing conservative, many social democratic parties have swung to the right to win back some of that electorate. An interesting example is the rise of the center-left NDP (New Democratic Party) in Alberta, one of the most conservative provinces of Canada. Many of the militants in the federal NDP are against the construction of new oil pipelines, for fiscal and environmental reasons; yet the Albertan NDP has taken a pro-big-oil stance in order to appease the seemingly conservative Albertan electorate. I am sure that the shift towards austerity politics of many of the mainstream social democratic parties is also related to this tailing of a supposedly conservative electorate.

However, once we start looking with nuance at hard data, rather than simply taking a phenomenological arithmetic majority for granted, we will find that the rise of the right wing isn’t really just a matter of false consciousness or ideology, but has a real class basis. In other words, today’s electoral choices largely emerge from the class interests of much of the voting base. This is simply because many of those who fit the Marxist definition of proletarian – someone who owns nothing except their own labor power – are not voting. It is well known that lower income makes it more likely that someone will not vote. In fact, there is a correlation between income inequality and low political participation.

Another interesting trend is that voter turnout in the developed world is steadily declining. This correlates with the increase of income inequality, the rise of the right wing, and yes, the decline of the Left.

Let us look at the United States as a particularly dire but interesting example. The reason why voters choose politicians who want to cut social programs and enforce austerity is that the same politicians often promise more tax cuts – a restructuring that would benefit people from higher tax brackets, who happen to be the people who vote. Surveys have found that nearly half of non-voters in the US make less than $30k in income. If you zoom into the lifestyle of a large percentage of $60k+ households – a life that may include mortgages, workplace insurance, fat credit lines, and segregated neighborhoods where race and income cut along zip code lines – voting patterns make sense. I imagine that the last person who would benefit from rent control, centralized school funding, and welfare is going to be an office manager who holds home equity and sends their kid to piano lessons.

These voting patterns are also interesting from a political economy perspective. Much of what passes as class analysis in the more popular iterations of Marxism usually only looks at workplace relations, and at whether someone collects a salary as opposed to being a capitalist. Yet one of the ways the liberal democratic state culled working class militancy was through the introduction of cheap credit, which suddenly turned much of the traditional working class into “property owners”, because they now hold home equity. Specifically, the skilled layer of the working class and professionals became petit-bourgeoisified (analogous to small landholders). In other words, this middle class, even if some of its members collect a salary, stops being proletarian in the Marxist sense (a class that owns nothing except its labor power) and turns into small property owners. In the American case, this was also related to racial dynamics, where a white middle class entrenched itself in segregated zip codes, with housing associations that monitor the evenness of lawns in order to maintain property values. Furthermore, zoning privileges are also a way of gatekeeping resources for their children’s social mobility – for example, through public schools that are only attended by rich people.

The existence of a petit-bourgeoisified middle and upper middle class, isomorphic to small landholders, can only manifest in the era of finance capital, as their lifestyle is sustained by debt that leads to financial fragility and secular decline. According to Minsky (who has recently been adapted into macroeconomic models), financial fragility emerges from banks and other financial institutions lending too much money in boom periods, which inevitably leads to financed enterprises that fail to be profitable. This generates a bubble that later bursts, creating business cycles and a dislocation between the financial sector and the real economy. Furthermore, as mentioned in my previous post, the financialization of capital correlates with the decline of productivity across virtually all industries, so only finance capital, rather than the “real economy”, can sustain these small proprietors. So it is no surprise that there is an almost clientelistic link between these small-proprietor, middle class whites and the most reactionary elements of capital, as the latter buys them off by giving them racialized financial leverage that is not available to poorer, racialized sectors.

No wonder left wing tendencies and social democratic parties have declined, and the ones that survived have shifted rightwards. For they all aim to convince “likely voters”, who tend to be petit-bourgeoisified middle classes whose class interests are aligned with tax cuts and fiscal austerity, in contrast to lower income individuals who do not vote as much, and who would benefit from wealth redistribution programs.

Instead of aiming for likely voters, leftists should create a genuine socialist party that fights for the working class and the poor. The key to socialist hegemony is politically activating unlikely voters, e.g. racialized, working class, and poor individuals, rather than trying to pull the heartstrings of a middle class. This strategy will not yield easy wins at the ballot box, for likely voters tend to be conservative. Instead it requires a long-run strategy in which socialist hegemony is created amongst unlikely, low income voters.

A minimum program for a party of the working class and the poor could contain some of the following policies: (i) nationalization of real estate (except the infrastructure built upon it), (ii) a job retraining program for casualized, unemployed, or low wage workers, (iii) a robust public healthcare infrastructure, (iv) abolition of temporary “work visas” and instead full citizenship for all immigrants, and (v) restructuring of educational infrastructure so that funding depends on head count rather than zip codes, including free higher education and student stipends. These positions are only tentative examples, and this minimum program should go hand in hand with the long term maximum program of a world workers’ republic, and the replacement of market mechanisms with world economic planning.

Much of the platform of a workers’ party will be opposed by the small-proprietor middle class, since it is diametrically opposed to their interests – for example, real estate nationalization contradicts home ownership. However, the large underclass that does not vote, and the segment of the working class that does go to the polls, can be won over by a program that considers their immediate class interests.

The outlook for a workers’ party is moderately optimistic. As the pauperization of millennials, who are poorer than their parents, and the recent financial crisis have shown, the lifestyle of middle class small proprietors inflated by financial debt is unsustainable. Therefore, the base for a future workers’ party is secularly increasing.


 

Crisis Theory: The Decline of Capitalism As The Growth of Expensive and Fragile Complexity

It’s an empirical fact that the economy experiences business cycles – in other words, oscillations between booms and busts. Furthermore, many argue that the economy is undergoing a secular decline; for example, productivity growth across all industries has decreased since the 1970s. What are the mechanisms behind these instabilities and this decline? What would an accurate theory of economic crisis look like?

Source: https://www.brookings.edu/wp-content/uploads/2016/09/wp22_baily-montalbano_final4.pdf

I believe that capitalism is both unstable – vulnerable to business cycles – and experiencing secular decline. The source of these trends is a set of feedback mechanisms, structural to capitalism, that encourage the growth of fragile and expensive complexity (logistics, rent-seeking, finance, etc.) in the pursuit of short-term profits. Furthermore, this complexity becomes increasingly separated from the human labor (see Marx on the labor theory of value) that directly or indirectly creates wealth (e.g. the factory worker, the doctor, the teacher), which means a larger ratio of overhead to wealth creation. The growth of expensive complexity in the long run means both declining productivity and fragility to the business cycle.

I will first review some of the theories that already exist to explain this secular decline and also the nature of business cycles. Then I will present my own crisis theory that addresses the weaknesses of the other existing models.

The mainstream economic approach to the business cycle is modelled through the so-called Dynamic Stochastic General Equilibrium (DSGE) framework. In this framework, mainstream economists assume the world economy is more or less in equilibrium (e.g. markets clear, and agents maximize their utility functions) until a random shock appears – for example, a sudden rise in oil prices. The nature and source of the shock are irrelevant in this model; the DSGE approach only dictates that random shocks are an economic reality. Thus the task of the economist reduces to studying how the structures of the economy amplify or dampen and propagate the shock. For example, after the 2008 crash, economists began taking seriously how aspects of the financial sector may amplify these shocks (they call these financial frictions). It appears mainstream economists have achieved a consensus only on business cycle modelling, not on the secular decline of the economy.

Hyman Minsky was an important heterodox thinker who elaborated a crisis theory, and who recently became widely cited because of the 2008 crash. Minsky argued that crisis emerges from endogenous activities in the financial sector. He explained that in booming times, banks and other financial institutions become “euphoric” and begin lending and borrowing quantities that in bust periods they would find too risky. Given that these financial actors are overconfident, a speculative investment bubble develops. At some point the debtors cannot pay back, and the bubble bursts, creating a crisis.

The most orthodox of the Marxist approaches to crisis is famously referred to as the theory of the tendency of the rate of profit to fall (TRPF). According to Marx, capitalism experiences a secular decline in the rate of profit as work is automated away by machines: fewer workers are employed, which means less human labor to exploit. As production becomes more optimized, machinery and raw materials absorb more of the costs of production, and rising productivity means fewer workers are employed. In Marxist analysis, profit comes from the exploitation of workers – from paying workers less than the value created by the hours they worked. So as machinery automates more of the labor, the rate of profit declines. According to Marx, in a hypothetical scenario where all labor becomes automated by robots, the capitalist wouldn’t profit at all!
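The arithmetic of the TRPF can be sketched in a few lines. In Marx’s notation the rate of profit is r = s/(c + v); holding the rate of exploitation s/v fixed while the organic composition c/v rises (more machinery per worker), r must fall. A minimal sketch of the algebra, with illustrative numbers, not an empirical model:

```python
# Tendency of the rate of profit to fall: r = s / (c + v).
# Assumes a fixed rate of exploitation s/v; all numbers are illustrative.

def rate_of_profit(c, v, exploitation_rate=1.0):
    s = exploitation_rate * v  # surplus value extracted from living labor
    return s / (c + v)

# As automation raises the organic composition c/v, r declines:
for c in [1.0, 2.0, 4.0, 8.0]:
    print(f"c/v = {c:.0f}  ->  r = {rate_of_profit(c, v=1.0):.3f}")
```

In the limiting case of full automation (c/v → ∞, no living labor), r goes to zero – the hypothetical scenario mentioned above.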

Finally, there are some crisis theories where more heterodox Marxist models and pseudo-Keynesian theories converge. Thomas Palley recently compared Foster’s Social Structure of Accumulation (SSA) theory to his own theory, Structural Keynesianism. Both Palley and Foster argue that the decline of economic growth is related to the stagnation of wages: if wages are stagnant, the aggregate demand necessary for growth is unmet, because workers don’t make enough to purchase commodities. They argue that this economic stagnation is related to the neoliberal growth model adopted since the 1970s. According to Palley, the only mechanisms that kept the economy from crashing were the overvaluation of assets and firms filling the hole in aggregate demand by taking on more debt. However, this excess of credit led to financial instabilities that eventually crashed the economy in 2008.

In my opinion all these approaches are flawed. For one, the mainstream approach under-theorizes the sources of fragility and the secular decline in the rate of profit. It is true that much of the crisis/business cycle story has to do with the fragility of the capitalist economy to volatility, which mainstream models capture. However, an important part of the story is why the capitalist system is fragile to these shocks in the first place. In fact, mainstream economists showed their ignorance with their inability to forecast the effects of the 2008 recession. After the crash, mainstream economists implicitly conceded to Minsky’s heterodox argument that the financial sector creates fragility: only after the crisis did they include in their DSGE models the financial instabilities Minsky described.

The problem with the Minskyan approach is that it is severely limited – it identifies only one source of fragility, the financial sector. It also does not theorize why the financial sector is “less real” than, for example, the manufacturing sector – which Minsky implicitly assumes when he attributes fragility only to the financial side. Because of this limited theorization, Minsky also fails to explain the secular decline of the rate of profit, content with explaining only the business cycle.

The greatest flaw of the “orthodox” Marxist approach is its dependence on pseudo-Aristotelian arguments. The TRPF model is based on a logical relation between very specific variables: the costs of raw materials and machinery (constant capital), the costs of human labor (variable capital), and the value extracted from the exploitation of human labor (surplus value). This spurious precision and logicality is unwarranted, as the capitalist system is too complex and stochastic for the behaviour of crises to be derived from a couple of logical propositions. One has to take into account the existence of instabilities and shocks, as the mainstream economists do. However, Marx still had a key insight: the aggregate wealth of the world must be sourced in human labor that produces use values. The source of wealth is dentists doing dentistry and construction workers doing construction work, not the dentist trying to make money by trading in the stock market. Furthermore, Marx identified a secular trend in the declining rate of profit, which is missing from other contemporary accounts.

Finally, Palley’s approach seems too politically motivated. To him, the stagnation of the economy is a matter of policy – of statesmen adopting the “wrong” set of regulations and deregulations. If politicians were just “objective” and followed Palley’s ideas, then crisis and decline could be averted! To Palley, the neoliberal phase was a matter of certain “top-down” policies rather than endogenous, spontaneous fragilities and instabilities inherent to the capitalist system. In my opinion, it is impossible to disaggregate what is political from what is inherently structural in the secular decline of capitalism, since the whole world economy is more or less neoliberalized at this moment, leaving no present-day alternative to compare against. So it seems to me a just-so story projected from the present onto the past, impossible to prove empirically.

One of the issues I have with the “left” theories of crisis, whether Keynesian or Marxist, is that they don’t take instability, uncertainty, stochasticity, and complexity seriously. Instead, proofs and discussions are reduced to Aristotelian logic-chopping over a few variables: in the Keynesian case aggregate demand, in the Marxist case surplus value, constant capital, and variable capital. A system that pulsates with billions of people is reduced to the logic-chopping of a few variables. Instead, we must devise a more holistic view of the capitalist world-system, taking into account its nonlinearities and fragilities.

The theories outlined above contain  parts of the truth, so we can use some of these elements to synthesize a model of crisis that contains the following: (i) economic fragility to instabilities and shocks,  (ii) endogenous sources of this fragility, (iii) a theory of the secular decline of the rate of profit. The concepts ultimately uniting these three points are fragility/nonlinearities and increasingly expensive complexity. For example, Minsky, by addressing the fragility in the financial sector, also implicitly points to a theory of  degenerative complexity, where the financial sector acts as a complex, expensive, and fragile  overhead that exists over the “real economy”.

We can use Taleb’s definition of fragility to make the concept more precise. Taleb mathematically defines fragility as harmful, exponential sensitivity to volatility. For example, a coffee cup can withstand stress up to a certain threshold; above it, the cup becomes exponentially vulnerable to harm, as any stress beyond that threshold will simply shatter it. Fragility is a nonlinear property because the cup does not wear and tear proportionally to stress: stress below the threshold inflicts negligible damage, while crossing the threshold shatters the cup all at once.
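The cup example can be sketched numerically (the threshold and the quadratic response below are made-up assumptions, not a physical model). Taleb’s definition amounts to the harm function being convex in stress, so that fluctuating stress inflicts more damage than the same stress held at its average – Jensen’s inequality in action:

```python
# Fragility as convex (accelerating) response to stress.
# Threshold and quadratic response are illustrative assumptions.

def harm(stress, threshold=10.0):
    # Negligible damage below the threshold; accelerating damage above it.
    return max(0.0, stress - threshold) ** 2

# Same average stress (8.0), very different outcomes:
steady = harm(8.0)                         # constant stress
volatile = 0.5 * (harm(4.0) + harm(12.0))  # fluctuating stress, same mean
print(steady, volatile)  # 0.0 2.0 -- volatility alone inflicts the harm
```

This is why the definition is about sensitivity to *volatility* rather than to stress per se: the fragile object is harmed by variation even when the average load is safe.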

Similarly, the capitalist world system probably has many thresholds, most of them currently unknown, because it is complex and nonlinear. It is complex because it is made of various interlocking parts (firms, individuals, governments, etc.) that form causal chains connecting across planetary scales. It is nonlinear because the behaviour of the system is not simply the “sum” of the interlocking parts, as the parts depend on each other; one cannot study the individual components in isolation and then understand the whole by adding them up. In other words, the interdependence of the units within capitalism makes the system nonlinear. Furthermore, nonlinear systems are frequently very sensitive to changes in their variables, where surpassing certain thresholds can make the system exhibit abrupt changes and discontinuities that often manifest as crises. These abrupt changes caused by the crossing of thresholds are a common mathematical property of nonlinear systems. Fragility therefore correlates with nonlinearities, abrupt jumps/shocks, and complexity.
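This threshold behaviour can be sketched with a toy loss-cascade model (all numbers are made up for illustration): firms in a chain absorb losses up to a capital buffer, and a failing firm passes a fixed loss to the next firm. A shock just below the buffer does nothing, while a marginally larger one collapses the entire chain – an abrupt, discontinuous jump from a tiny change in input:

```python
# Toy loss-cascade in a chain of interdependent firms.
# Buffer and contagion sizes are illustrative assumptions.

def failures(shock, n_firms=50, buffer=1.0, contagion=1.2):
    """Number of firms that fail after an initial shock to firm 0.

    A firm fails when its incoming loss exceeds its capital buffer;
    each failure imposes a fixed loss `contagion` on the next firm.
    """
    count, loss = 0, shock
    while count < n_firms and loss > buffer:
        count += 1
        loss = contagion  # each failure transmits a new loss downstream
    return count

print(failures(0.99))  # 0  -- shock absorbed, nothing happens
print(failures(1.01))  # 50 -- slightly larger shock, total collapse
```

The discontinuity comes entirely from the interdependence of the units: no single firm’s balance sheet reveals that the system-wide response to a shock is all-or-nothing.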

However, it is not enough to say that the capitalist world system is fragile because it is nonlinear. The point is that the capitalist world system structurally generates feedback loops that lead to the accelerated creation of endogenous fragilities. The frenetic pursuit of short-term profits in increasingly competitive contexts leads to the creation of fragile, nonlinear complexity. This is because a firm must invest in ever more expensive research, infrastructure, and qualified personnel to generate innovation that leads to profit in the short term, as many of the “low hanging fruits” have already been plucked. So capitalism leads to random “tinkering” by firms and institutions to produce profit, often by adding ad-hoc complexity. This complexity may generate short-term profits, but it is expensive in the long term. Joseph Tainter tries to measure the productivity of innovation by looking at how many resources go into creating a patent. For example, here is a plot showing how the ratio of patents to GDP and to R&D expenses has declined since the 70s:

 

Source: https://voxeu.org/article/what-optimal-leverage-bank

Another marker of increased expensive complexity is  how many people are required to create a patent:

Source: https://onlinelibrary.wiley.com/doi/full/10.1002/sres.1057

A very common and well-studied example of this nonlinear complexity is the financial system, a growth of complexity in the service of the profit motive. Cash flows are generally too slow, and cash reserves too low, to cover the capital required to start firms or to add a layer of complexity required for more profitability, so agents must resort to credit and loans. In other words, the financial system acts as a fast, short-timescale distributive mechanism for funnelling resources to banks, firms, and individuals that require quick access to capital in spite of low cash flows. Without the financial system, growth would be much lower, because access to capital could only be facilitated through cash flows. However, as Minsky noted decades ago and mainstream economics emphasizes now, the financial system is extremely unstable, complex, and nonlinear – and therefore fragile. Here is a figure showing how the “leverage ratio” of UK banks – roughly the ratio of debt to equity – grew exponentially from the 1880s to the 2000s; in other words, banks depend on loans and credit for fast access to capital.


Source: https://voxeu.org/article/what-optimal-leverage-bank
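The fragility that leverage creates can be sketched with basic balance-sheet arithmetic (illustrative numbers; leverage is defined here as assets over equity, which is close to debt over equity when leverage is high). Since equity is only assets/leverage, a fall of fraction d in asset values destroys a fraction d × leverage of the equity, and the bank is insolvent once d exceeds 1/leverage:

```python
# How leverage turns a small asset shock into an equity wipeout.
# Balance-sheet arithmetic only; the numbers are illustrative.

def equity_after_shock(assets, leverage, drop):
    equity = assets / leverage          # thin equity slice
    debt = assets - equity              # the rest is borrowed
    return assets * (1 - drop) - debt   # debt is still owed in full

# A 6.25% fall in asset values, at low vs. high leverage:
print(equity_after_shock(100.0, 5, 0.0625))   # 13.75 -- bruised but solvent
print(equity_after_shock(100.0, 25, 0.0625))  # -2.25 -- insolvent
```

The same shock that barely dents a low-leverage balance sheet wipes out a highly levered one, which is why the exponential growth of leverage in the figure above translates directly into exponential growth of fragility.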

The inverse relation between complex overhead and growth has been empirically observed across various parts of capitalism. Some examples: the cost diseases associated with industries like education and healthcare, the admin bloat in education and healthcare, the stagnation of productivity across virtually all industries including manufacturing, and the stagnation of scientific productivity in spite of exponential growth in the number of scientists and fields.

Furthermore, capitalism encourages rent-seeking and expensive complexity even when there is no benefit to wealth production for the economy in general. This rent-seeking scenario is probably the case for admin bloat at universities: there is a transfer of wealth from society to certain sectors of the university, but no obvious economic benefit for society in general. This is in contrast to traditional, profitable industries where profit leads to capital valorization through the reinvestment of that profit.


As noted in a previous post, there is also a secular degeneration of science alongside the secular decline of capitalism. To summarize that post: as informational complexity grows at a faster rate than empirical validation and knowledge production, an informational bloat of unverified scientific theories gets created. An obvious example is the complex bloat of theoretical physics models that predict all sorts of new particles, in spite of the fact that the Large Hadron Collider, a multibillion dollar experiment, has failed to confirm any of them. So you have a whole layer of professionals who are simply experts in unverified, degenerative theories, and who collect large salaries in spite of contributing to neither economic nor epistemic growth. Another example of a degenerative profession is economics. Judging from the stagnating productivity across most industries, we can probably assume that this caste of degenerative professionals is rampant across all corners of capitalism. This caste of degenerative professionals and “degenerative” experiments adds expensive and fragile complexity to capitalism.

Source: https://blogs.scientificamerican.com/cross-check/is-science-hitting-a-wall-part-1/

Finally, as complexity grows, there is an increasing dislocation between abstracted logistical, degenerative, and “scientific” complexity and the human labor that creates wealth. A very good example is finance. To paraphrase and elaborate on Taleb: the wealth of the world is created by dentists doing dentistry and construction workers doing construction work, not by the dentist trying to become rich by trading their savings in the financial market. This is where Marx becomes relevant, for the wealth of society comes from human labor, not from the transfer of wealth through administrative and accounting tricks, or through the circulation of financial instruments. This bloated complexity is required for the functioning of capital because of financial, accounting, and logistical constraints. Much of it acts as an overhead for the world-economy, required for the survival of capital itself, but it does not necessarily create socially necessary wealth. An example of the fragility of this separation between wealth creation and complex abstraction is the existence of speculative bubbles: due to the overconfidence of the financial industry, assets are often overvalued, and at some point their value collapses, as the dislocation between the real and financial economy becomes unsustainable. This financial instability was described by Minsky and is now understood by mainstream economists, who incorporate it in their models.

Here we can sketch a theory for the secular decline of capitalism. First, there is a secular increase of fragile, nonlinear complexity, driven by the ad-hoc tinkering of firms and institutions pursuing short-term profits at the expense of fragility. Much of this expensive complexity is due to rent-seeking, where specialists trained in degenerative methods that add no obvious knowledge or efficiency self-reproduce and multiply – string theorists, economists, university admins, healthcare admins, etc. In the long run, all this complexity created for short-term profit becomes increasingly expensive, leading to ever slower productivity growth (GDP growth per labor hour). Part of this lowering of productivity is the increasing dislocation between the human labor that produces wealth and an abstracted layer of researchers, administrators, managers, etc. Furthermore, not only is there a secular decline of the economy, but there are also increasing fragilities and instabilities, as the bloated complexity is very nonlinear, coupling agents across planetary scales – for instance, the financial industry transcends national economies. So the world economy becomes increasingly vulnerable to shocks, due to nonlinearities (caused by interdependencies) that lead to abrupt changes. These instabilities and fragilities give rise to the so-called business cycle.

In conclusion, a socialist theory of crisis should begin by looking at the economy as a whole, taking into account its instabilities and fragilities. In my opinion, the methodologies of the various Keynesian and Marxist schools are wrong because they pretend to have identified a couple of important variables (e.g. aggregate demand, the organic composition of capital) and then logically derive a theory of crisis from them. However, because the economic system is extremely complex and nonlinear, these theories probably amount to just-so stories: the mechanisms behind the instabilities of capitalism are probably very varied (and many of them unknown), and therefore cannot be pinned to a few specific sources. A better approach is to analyze how capitalism creates endogenous feedback loops that lead to fragility, through generalized and socially unnecessary nonlinearities and complexities. This nonlinearization and complexification is imposed in order to pursue short-term profits, at the expense of long-term productivity. Another important issue is how a large part of this complexity becomes increasingly dislocated from wealth-creating labor – such as the dislocation between administrators and professors, or the financial sector and the real economy.

I am confident many of the theories presented in this article can be both quantified and verified against empirical data in a much more rigorous way than done here. But alas, there isn’t an eccentric millionaire backing this research program😞.


Ergodicity as the solution for the decline of science


In a previous post I explored the decline of science as related to the decline of capitalism. A large aspect of this decline is how the increase of informational complexity leads to diminishing marginal returns in knowledge. For example, the last revolution in physics appeared roughly one hundred years ago, with the advent of quantum mechanics and relativity. Since then, the number of scientists and fields has increased exponentially, and the division of labor has become ever more complex and specialized. Yet the Large Hadron Collider, a billion-dollar-per-year experiment created to probe the most fundamental aspects of theoretical physics, has failed to confirm any of the new theories in particle physics. The decline of science is coupled to the decline of capitalism in general, as specialist and institutional overhead increases exponentially across industries while GDP growth has been sluggish since the 1970s.

Right now, across scientific fields, there is an increasing concern about the overproduction of “bad science”. Recently the medical and psychological sciences have been making headlines because of their high rates of irreproducible papers. Even in the more exact sciences there is a stagnant informational bloat, with a flurry of math bubbles, theoretical particles, and cosmological models inundating the peer-review process, despite billion-dollar experiments like the Large Hadron Collider confirming none of them, and with no scientific revolution on the horizon.

There is no shortage of proposed solutions to this perceived problem. Most of them are suggestions for making the peer-review process more rigorous and refining the statistical techniques used for analyzing data: using Bayesian statistics instead of frequentism, encouraging the reproducibility of results, and finding ways to constrain “p-value hacking”. Occasionally a bolder writer will argue for “interdisciplinarity”, or that scientists should talk more to philosophers, but these calls for “thinking outside the box” are usually very vague and broad.

However, most of these suggestions would simply exacerbate the problem. I would argue that the bloat of degenerative informational complexity is not due to lax standards. To see why, let’s analyze p-value hacking. A common heuristic in the social sciences is that for a result to be significant, it should have a p-value of less than 0.05. In layman’s terms, this implies that your result has only a 5 percent probability of being due to chance (not the exact definition, but it suffices for this example). You have now established a “standard” that can be gamed in the same way lawyers game the law: researchers find all sorts of clever ways of “p-hacking” their data so that it passes the threshold. This gaming ranges from conscious fraud, such as excluding the data that raises the p-value (a high p-value means your results are likely due to chance), to unconscious biases, like discarding certain data points because you convince yourself they are measurement errors, all to protect your low and precious p-value.
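This kind of selective data exclusion can be made concrete with a toy Monte Carlo sketch (my own construction, not drawn from any particular study; the sample sizes, effect of zero, and "drop the five worst points" rule are all invented for illustration). Every simulated experiment draws from a null distribution with no real effect, yet the "hacked" experiments that discard inconvenient data points cross the 0.05 threshold far more often than the honest, nominal 5 percent:

```python
import math
import random
import statistics

def p_value(sample, mu0=0.0):
    """Two-sided p-value from a crude one-sample z-test (adequate for n >= 30)."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    # normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(hack, n_experiments=2000, n=50, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_experiments):
        data = [rng.gauss(0, 1) for _ in range(n)]  # the null is true: no real effect
        p = p_value(data)
        if hack and p >= 0.05:
            # "unconscious bias": drop the five observations that most
            # contradict the hoped-for positive effect, then test again
            data = sorted(data)[5:]
            p = p_value(data)
        hits += p < 0.05
    return hits / n_experiments

honest = false_positive_rate(hack=False)
hacked = false_positive_rate(hack=True)
print(f"honest false-positive rate: {honest:.3f}")  # close to the nominal 0.05
print(f"hacked false-positive rate: {hacked:.3f}")  # several times higher
```

The point is that the 0.05 rule does its job only as long as nobody conditions their data handling on it; the moment the rule becomes a target, even a mild, well-rationalized exclusion habit inflates false positives severalfold.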

The more rigid rules a system has, the more is invested in “overhead” to regulate those rules and to game them. Almost everyone grasps this intuitively – hence the standard resentment against bureaucrats who take the roundabout and sluggish way to accomplish anything. In the sciences, once an important study, experiment, or theorem generates a new rule or “methodology”, perverse incentive loops emerge in which researchers use that rule to create paper mills, which in turn are used to game citation counts. Instead of earnest research, you get an overproduction of “bad science” that amounts to the gaming of certain methodologies. String theory, which can be defined as a methodology, was established as the only game in town a couple of decades ago, which in turn pushed young theoretical physicists to invest their time and money in gaming that informational complexity, generating even more complexity. Something similar happens in the humanities, where a famous (usually French) figure establishes a methodology or rule, and the Anglo counterparts game it to produce concatenations of polysyllabic words. Furthermore, this fetish of informational complexity in the form of methods and rules creates a caste of “guild keepers” who are learned in these rules and accrue resources and money while excluding anybody who isn’t.

This article serves as a “microphysical” account of what leads to the degenerative informational complexity and diminishing returns I associated with modern science in my previous post. What, then, would be the solution to such a problem? The answer is one word: ergodicity.

As said before, science has become more specialized, complex, and bloated than ever before. However, just because science has grown exponentially doesn’t mean it has become more ergodic. By ergodic I specifically mean that all possible states of a system are explored. For example, a die thrown a large number of times is ergodic, since it eventually lands on every one of its faces. Ergodicity has a long history in thermodynamics and statistical mechanics, where physicists often have to assume that a system has accessed all its possible states. This hypothesis allows physicists to calculate quantities like pressure or temperature by making theoretical approximations of the number of states a system (e.g. a gas) has. But we can use the concept of ergodicity to analyze social systems like “science” too.
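The die example can be made concrete in a few lines of Python (an illustrative sketch; the face weights are my own invented choice): a fair die thrown enough times visits its whole state space, while a loaded die confined to one face never does – which is exactly the distinction the rest of this argument leans on.

```python
import random

def faces_seen(rolls, weights=None, seed=42):
    """Throw a six-sided die `rolls` times and return the set of faces observed."""
    rng = random.Random(seed)
    return set(rng.choices([1, 2, 3, 4, 5, 6], weights=weights, k=rolls))

# A fair die is ergodic in this sense: enough throws visit every face...
fair = faces_seen(1000)
# ...while a die loaded to land only on six never explores the other states.
loaded = faces_seen(1000, weights=[0, 0, 0, 0, 0, 1])
print(fair, loaded)
```

The fair die reaches all six states almost surely after a thousand throws; the loaded one stays trapped in `{6}` no matter how long you run it.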

If science were ergodic, it would explore all possible avenues of research, and individual scientists would switch research programs frequently. Now, social systems cannot be perfectly ergodic, as they are dynamic and their “number” of states grows (e.g. the number of scientists grows). But we can treat ergodicity as an idealized heuristic.

The modern world sells us ergodicity as a good thing. Often, systems describe themselves as ergodic as a defence against detractors. When politicians and economists claim that capitalism is innovative, and that it gives every worker a chance at becoming rich (and every rich person a chance of becoming poor), they are implicitly describing an ergodic system. Innovation implies that entrepreneurs experiment with and explore all possible market ideas so that they can discover the best ones. Similarly, social mobility implies that a person has a shot at becoming rich (or, if already rich, becoming poor) if that person lives long enough. In real life, we know the ergodic approximation is a poor one for capitalism: the rich tend to stay rich and the poor tend to stay poor. We also know that important technological innovation is often carried out by public institutions such as the American military, not the private sector. Still, ergodicity is invoked because it is viscerally appealing. We want “new blood” in fields and niches, and we resent bureaucrats and capitalists insulated from the chaos of the market for not giving other deserving people a chance.

One of the reasons ergodicity is appealing is that there is really no recipe for innovation except experimentation and the exploration of many possible scenarios. That’s why universities often have unwritten rules against hiring their own graduate students into faculty positions – they want “new blood” from other institutions. A common (though incorrect, as described above) argument against public institutions construes them as dull and stagnant at generating new products or technologies compared to the more “grassroots” and “ergodic” market. So I think there is a common intuition among both laymen and many professionals that the only sure way of finding out whether something “works” is to try different experimental scenarios.
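That intuition can be caricatured with a standard multi-armed-bandit toy model (my own sketch; the ten "research programs", their hidden payoffs, and all parameters are invented for illustration): an epsilon-greedy searcher that keeps sampling every program tends to find the best one, while a purely exploitative searcher locks onto an early favourite and stays there.

```python
import random

def best_found(explore_prob, n_trials=5000, seed=1):
    """Epsilon-greedy search over ten 'research programs' with hidden payoffs."""
    rng = random.Random(seed)
    payoffs = [rng.random() for _ in range(10)]  # hidden true quality of each program
    estimates = [0.0] * 10                       # the searcher's running beliefs
    counts = [0] * 10
    for _ in range(n_trials):
        if rng.random() < explore_prob:
            arm = rng.randrange(10)                           # explore: try anything
        else:
            arm = max(range(10), key=lambda i: estimates[i])  # exploit current favourite
        reward = payoffs[arm] + rng.gauss(0, 0.1)             # noisy observation
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
    chosen = max(range(10), key=lambda i: estimates[i])
    return payoffs[chosen], max(payoffs)

found, best = best_found(explore_prob=0.2)  # keeps sampling every program
stuck, _ = best_found(explore_prob=0.0)     # dogmatic: locks onto an early favourite
print(f"explorer settled on quality {found:.2f} (best available: {best:.2f})")
print(f"pure exploiter settled on quality {stuck:.2f}")
```

Averaged over many random worlds, the explorer ends up on markedly better programs than the exploiter – a cartoon, but one that captures why "new blood" and forced rotation matter when nobody knows the payoffs in advance.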

Now let’s return to science. The benefit of ergodicity in science was indirectly supported by the infamous philosopher Feyerabend. Before him, philosophers of science tried to come up with recipes for what does and does not work in science. An example is Popper, who argued that science must be falsifiable. Another is Lakatos, who came up with heuristics for what causes research programs to degenerate. Yet Feyerabend argued that the only real scientific method is “anything goes” – an attitude he termed epistemological anarchism. He argued that scientific breakthroughs don’t usually follow any hard-and-fast rules, and that scientists are first and foremost opportunists.

Feyerabend got a lot of flak for these statements, his detractors accusing him of relativism and anti-scientific attitudes. He didn’t help his case: he was often inflammatory on purpose, seeking to provoke a reaction (for example, by putting astrology and science on the same epistemic level). However, I would say that in some sense he was protecting science from dogmatic scientists. To use the terminology sketched in the previous paragraphs: he was ultimately arguing for a more ergodic approach to science, so that it doesn’t fall into the dogmatic trap.

This dogmatic trap was already explained above: the idea that more methods, rules, divisions, thought policing, and rigour will always lead to good science. Instead it leads to a growth of degenerative research that amounts to gaming certain rules, which in turn leads to the growth of degenerative specialists who are experts only in degenerative methods. Meanwhile, all this growth is non-ergodic, because it is organized around respecting certain rules and regulations, which constrains the exploration of all possible scenarios and states. It’s like loading a die so that the six always faces up, instead of allowing the die to land on all possible faces.

How can we translate these abstract heuristics of ergodicity into real scientific practice? The problem with much of the philosophy of science – whether done by professional philosophers or by scientists unconsciously doing philosophy – is that it looks at individual practice. It comes up with a laundry list of specific rules of thumb that an individual scientist must follow to make their work scientific, including certain statistical tests and reproducibility. But the problems are social and institutional, not individual.

What is the social and institutional solution? Proposing solutions is harder than describing the problem. However, I always try to sketch a solution, because I think criticism without a proposal is somewhat cowardly – it avoids opening yourself up to criticism from readers.

The main heuristic for solving these problems should be collapsing the informational complexity in a planned, transparent, and accountable way. As mentioned before, this informational complexity is like a cancer that keeps growing, and its source is probably methodological dogmatism, with complex overhead bloating as researchers find ever more convoluted ways of “gaming” the rules. Here are some suggestions for collapsing complexity:

  1. Cut administrative bloat and instead have rotating academics fill the essential administrative postings.
  2. Get rid of the peer-review system, and instead use an open system similar to arXiv.
  3. Collapse some of the academic departments into bigger ones. For example, much of theoretical physics has more in common with mathematics and philosophy than with the more experimental parts of physics. Departments should be reorganized so that people with more in common interact with each other.
  4. Create an egalitarian funding scheme, based more on divisions between theory and experiment than between departments. Everyone in the same category should receive the same minimum amount of funding, with quantities based on how many resources a specific type of work realistically requires. For example, a theoretical physicist who uses only pencil, paper, and a personal computer has a lot in common, financially, with a sociologist who does the same.
  5. Beyond the minimum funding outlined above, allocate excess funding democratically, with input from outside the professions.
  6. Abolish the distinction between tenured professor and adjunct. Instead, everyone should teach.

Hopefully the destruction of admin bloat and of the adjunct/tenure distinction would release resources that could be spent on hiring researchers, instead of relying on bad heuristics such as publication and citation counts as filters for new hires.

Many of these recommendations cannot be seen in isolation, since the University is intimately coupled to society and the economy as a whole. For example, part of the admin bloat comes from legal liabilities and from the state offshoring some of its responsibilities onto universities. Number 6 would require a radical reconfiguration of society in general. Number 5 couldn’t be enacted today, since our “democratic” institutions are composed of non-ergodic, technocratic lifers.

This takes me to the political conclusion that the problems of science should be seen as the problems of society as a whole. The only sure way to find solutions to problems is an ergodic approach. Right now the state is non-ergodic: it is occupied and controlled by political and bureaucratic lifers. These non-ergodic bureaucracies in turn generate informational complexity, as new regulations and “rules” are imposed by the same caste of degenerative professionals, which in turn requires even more complex overhead. Instead, the State (and, in a socialist society, the means of production) should have a combination of democratic and sortition mechanisms that make it impossible for individuals to stay too long in power. This democratic vision should be supported by broad and free education programs that give individuals the knowledge required to rule themselves in a republican way. Not only does this method guarantee more equality, it also turns society into a great parallelized computer that solves problems by ergodic trial and error, through the introduction of new blood, sortition, and democratic accountability.


The Decline of Science, The Decline of Capitalism


Can another Einstein exist in this era? A better question is whether the spirit of his research program could emerge again in our current predicament. By his research program, I mean the activity that, through a few thought experiments and heuristics, grasped fundamental principles that revolutionized not only physics but our whole ontology. Through a combination of imagination and mathematical prowess – such as imagining himself riding a beam of light, and then translating that image into the language of geometry – he revolutionized our most fundamental intuitions of space and time.

Fast-forward a hundred years: physics has become increasingly specialized and fractal-like, with theoretical physics atomized across many sub-disciplines. Given this complex landscape, there is simply not enough bandwidth to engage the informational complexity of all relevant fields in order to grasp something both holistic and fundamental. Instead, scientific knowledge is atomized among the various disciplines. And although this division of labor and increased informational complexity has a legitimate logic – many fields truly have become more specialized and complex in a useful, authentic sense – the complexity has diminishing marginal returns. We can see this effect in some of the paper mills of theoretical physics, with theory after theory that may have only tenuous links to the facts of the world. At some point, the complexity and the literature grew exponentially, outstripping empirical confirmation.

One of the most striking examples of the diminishing returns of complexity is the lack of revolutionary shifts in theoretical physics. The last major physics revolutions, quantum mechanics and relativity, happened roughly a hundred years ago, in spite of the huge increase in the number of scientists and disciplines over the last century. There is no shortage of models and theories, yet the creation of novel predictions and their empirical confirmation is slowing down, as evidenced by the inability of expensive particle physics experiments to confirm any of the new particles conjectured by the last generation of theoretical physicists. In other words, to use Lakatos’ terms, theoretical physics is degenerating, because there is an exponential increase in informational complexity without much empirical content backing it. In short, all the new and expensive scientists, computers, theories (e.g. supersymmetry, string theory), and cryptic fields are generating diminishing returns in knowledge.

However, it is not only the academic sciences that are degenerating. In this stage of capitalism, the degenerative research program is universal: it encompasses all relevant fields of human inquiry and knowledge. This degeneration therefore exists not only at the apex of academia but in any institution meant for problem solving. We find a decrease in productivity across many industries and the economy as a whole, which signals diminishing returns on complexity. Since all these institutions are problem-solving, and use some sort of method or episteme, we can say that their theories of the world are degenerative, in analogy to the Lakatosian concept of the degenerative research program. In spite of their bloat in specialists, the marginal returns on the “knowledge” necessary for production decrease.

Perhaps the most incredible aspect of this decline is the existence of experts in almost wholly degenerative methods. As degenerative methods – methods without much empirical backing – increase exponentially in volume, the informational complexity requires ever more specialists to manage it, and these experts end up specialized almost entirely in decaying methods. Economists and string theorists are the quintessential examples of degenerative professionals.

This degeneration of the universal research program, and with it the creation of a degenerative caste of professionals, has not gone unnoticed by the population. It has probably fueled part of the anti-intellectual and anti-technocratic wave that brought Trump to power. For example, people often complain about the increasing inaccessibility of academic literature, with its overproduction of obscure jargon. Another example is the knee-jerk hatred for administrators, managers, and other technocratic professionals who are seen as doing increasingly abstracted work disconnected from what is happening on the ground. A common target of this criticism is the admin bloat that festers in universities.

This abstract process of the degenerative research program is linked to the health of capitalism in a two-way feedback loop, given that it is through problem solving that capitalism develops technological and economic growth. Perhaps we can understand the health of capitalism better by referring to the ideas of the anthropologist Joseph Tainter. Tainter argues that societies are fundamentally problem-solving machines, and that they add complexity in the form of institutions, specialists, bureaucrats, and information in order to increase their capacity to solve problems in the short term. For example, the early irrigation systems of Mesopotamian civilizations, crucial for agriculture and therefore survival, created their own layer of specialists to manage them.

However, complexity is expensive, as it adds more energy and resource usage per capita. Furthermore, the problem-solving ability of institutions yields diminishing returns as more expensive complexity is added. At some point, complex societies end up with a very expensive layer of managers, specialists, and bureaucrats who can no longer deliver in problem solving. Because this complexity no longer makes society more productive, the economic base, such as agricultural output, cannot grow as fast as the expensive complexity, and society collapses. Collapse resets complexity by producing simpler societies. Tainter argues that this was the fate of many ancient empires and civilizations, such as the Romans, Olmecs, and Mayans. Tainter is in effect arguing for a theory of the decline of the mode of production, where modes of production are “cyclical”, with an ascendant and a descendant stage. Using this picture, we can begin to identify a stage of capitalism in decline.
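Tainter's curve can be caricatured with a stylized toy model (my own invention, not Tainter's formalism; the functional forms and parameters are purely illustrative): if problem-solving benefit grows sublinearly with complexity while maintenance cost grows linearly, net returns rise, peak, and then decline toward zero and below – the "collapse" region.

```python
# Stylized Tainter curve: sublinear benefit minus linear maintenance cost.
def net_return(complexity, benefit_scale=10.0, cost_per_unit=1.0):
    return benefit_scale * complexity ** 0.5 - cost_per_unit * complexity

for c in range(0, 101, 25):
    print(c, round(net_return(c), 1))
# With these parameters, net returns peak at complexity = 25; past that point,
# each extra unit of complexity makes the society poorer, not richer.
```

The exact shape is arbitrary, but the qualitative lesson is the one Tainter draws: a society can be rationally adding complexity at every step and still end up on the downslope of the curve.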

This decline of capitalism has plenty of empirical evidence. “Bourgeois” think-tanks like the Brookings Institution argue that productivity growth has declined since the 1970s. Marxist economists like Michael Roberts assert that the empirical data show the rate of profit falling in the US since the late 1940s. Not to mention the recent Great Recession of 2008. This economic and material decline is linked to the degenerative research program, as the expensive complexity of degenerative institutions expands faster than the economic base (e.g. GDP). The exponential growth of administrators in healthcare and universities, at the expense of physicians and professors, is symptomatic of this degeneration.

The degeneration of the universal research program has two important consequences. First, a large part of the authority figures who base their expertise on credentials are illegitimate: if they belong to a degenerative caste of professionals (politicians, economists, etc.), they cannot claim authority on relevant knowledge, because their whole method is corrupted. This implies that socialists should not feel intimidated by the credentials and resumés of the technocrats closest to power. As mentioned before, right-wing populists such as Trump partially understand this phenomenon, which has unleashed his reactionary electorate against the “limousine liberals” and “deep-statists” in Washington D.C. It’s time for us socialists to understand that particular truth, and not be afraid to counter the supposed realism and expertise of the neoliberal center. The second consequence is that our methods of inquiry, such as science or philosophy, have stalled. Instead, the feedback loop of complexity creates more degenerative specialists who are experts in an informational complexity that has only a tenuous connection with the facts of the world. Whole PhDs are earned in degenerative methods – for example, by scientists specializing in some theoretical framework in physics that has never been validated empirically.

What is the socialist approach to the degeneration of the research program? One cannot say that socialists would be immune to similar problems, given that informational complexity will always be required when dealing with our complex civilization. But capitalism has particularly perverse incentives for degenerative research programs. The degenerative research program survives through gate-keeping that safeguards the division of labor for well-paid and powerful professionals. An obvious example is contemporary professional economics, which largely requires the absorption of sophisticated graduate-level math in order to enter the profession, even though those mathematical models are largely degenerative. In the political landscape at large, the State is composed of career politicians and technocrats who safeguard their positions through undemocratic gate-keeping in the form of elite networking and resumé padding. The rationale for this gate-keeping is that these rent-keepers accrue power and wealth through the protection of their degenerative research programs. Furthermore, capitalism accelerates the fracturing of the division of labor as it pursues short-term productivity at all costs, even when this complexity becomes, in the long term, expensive and a liability.

The socialist cure for the degeneration of the research program could consist of two main ingredients. First, institutions that command vast control over society and its resources should democratize and rotate their functionaries and “researchers”. In the case of the State, a socialist approach would eliminate the career politician by imposing stringent term limits and making many functionaries, such as judges, accountable to democratic will. Since there are diminishing returns in knowledge through specialization and informational complexity, a broad public education (up to the university bachelor’s level) could guarantee a body of citizens sufficiently educated to partake in the day-to-day affairs of the State. Instead of a caste of degenerative professionals controlling the State, an educated body of worker-citizens could run its day-to-day affairs through a combination of sortition, democracy, and stringent term limits.

The second ingredient consists of downsizing much of the complexity by focusing on the reduction of the work-day through economic planning. Since one of the main tenets of socialism is reducing the work-day so that society is ruled by the imperatives of free time rather than the compulsion of toil, this would require eliminating industries that do not satisfy social need (finance, real estate, some of the service sector, some aspects of academia) in order to create a leaner, more minimal state. Once the work-day is reduced to only what is necessary for the self-reproduction of society, people will have free time to partake in whichever research program they choose. This may give rise to alternative research programs that don’t require mastering immense informational complexity to enter. Perhaps the next scientific revolution can only arise by making science more democratic and free. This vision contrasts with the elitist science that exists today, which is at the mercy of hyper-specialized professionals who are unable to take a holistic, bird’s-eye view of their field, and are therefore unable to grasp the fundamental laws of reality.


On Hegel and the Intelligibility of the Human World


I’ve been studying Hegel lately because I find value in his idea that history has an objective structure and is intelligible. He argued that History is rational, and that its chain of causes and effects can therefore be understood by Reason. I deeply believe in the intelligibility of history and of the human world at large, since I advocate that the human world be administered in a planned and democratic way, which requires the possibility of scientific understanding. In contrast, many contemporary thinkers are extremely skeptical about the intelligibility of the human world. For example, many economists have proclaimed that socialist planning is flawed because the supply and demand of goods cannot be made rationally intelligible to planners. We see similar arguments from the Left in the form of post-structuralist attacks against the “master narratives” that seek to unearth the rational structure of the human world. Contemporary criticisms of the Enlightenment, for instance, sometimes argue that the same reason used to understand the world is used to dominate human beings, because Reason starts to see humans as stacks of labor power to be manipulated for some instrumental end.

However, in my opinion, to deny the intelligibility of the human world, or to deny that this intelligibility can ever be used for emancipation, is to deny the possibility of politics, for political actors must have a theory of where history is marching – in other words, of “which way the wind blows”. Political agents need to ground themselves in a world-theory so they can propose a political program that would either change the direction of history to another, preferred course, or enhance the direction it is taking right now. The IMF, Bretton Woods, the Iraq War, the current austerity onslaught, etc. have or had armies of politicians, intellectuals, and technocrats wielding scientific reason, trying to grasp where the current of history flows, and developing policy in line with their world-theory. Given that our “enemies” (the capitalist state, empire) use a scientific understanding of history in order to destroy the world, I will attempt to instrumentalize my reading of Hegel to make a case for a socialist intelligibility of the human world, whose purpose is to free humanity through socialist planning. I am not trained in philosophy, so my reading of Hegel may not be entirely accurate – but accuracy isn’t really my goal so much as using him as an inspiration for making my case.

Hegel and many other thinkers of the 19th century were optimistic about uncovering the laws of motion that drive history, and thus the evolution of the human world. Hegel thought that history was intelligible insofar as it can be rationally understood as marching in a certain rational direction – towards freedom – even if the human beings who make this history are often driven by irrational desires. For example, Hegel thought the French Revolution, following the evolutionary path of history, brought about the progress of freedom in spite of its actors being driven by desires that may have had concretely nothing to do with freedom (e.g. glory, self-interest, revenge). To Hegel, the French Revolution was a logically necessary event that follows from the determined motion of history towards freedom. In parallel, Marx, who “turned Hegel on his head”, thought that the human world could be understood as a function of the underlying economic structure (e.g. capitalism or feudalism) and its class composition. Furthermore, Marx argued that the working class, due to its objective socio-economic position as the producer of the world’s wealth, could bring about socialism.

Not only were Hegel and Marx optimistic about the intelligibility of the human world; they also held that a liberated society would make use of this intelligibility to make humans free. Hegel thought that the end of history would be realized by a rational State that scaffolds people’s freedom by making them masters of a world they can understand and manipulate in order to realize their liberties and rights. This is why Hegel thought the French Revolution revealed the structure of history: the event demanded that the laws of government be based on reason and serve human freedom. For Marx and his socialist descendants, the fact that the economy is intelligible means that a socialist society could administer it for social need, as opposed to the random, anarchic, crisis-ridden chaos of capitalism. The socialist case for the intelligibility of the human world gave rise to very ambitious and totalizing political programs, with calls for the economy to be planned for the sake of social need, and with the working class as the coherent agent for enacting this program. Some Marxists describe these totalizing socialist narratives as programmatism: the phenomenon of coherent socialist parties with grandiose and ambitious political programs for restructuring the world through the universal agency of the working class.

However, from the 20th century onwards, much intellectual activity was spent arguing against this intelligibility of the human world, and therefore against the totalizing socialist program. In the economic sphere, Hayek argued that the economy was too complicated and fine-grained to be consciously understood by human actors, making conscious economic planning an impossibility. From the Left, post-structuralist theorists attacked the idea that there exist underlying, objective structures that steer and scaffold the human world. Philosophers such as Laclau and Lyotard criticized nineteenth-century thinkers like Marx and Hegel for their totalizing narratives of how history marches and their certainty about scientific approaches to the world. In many ways these post-structuralist and marginalist views do reflect certain aspects of the current political landscape. The market in the West has liberalized considerably since World War II, expanding the role of price signals in directing the distribution of goods, which seems to echo Hayek’s propositions. In western liberal democracies, electoral politics is often interpreted as a heterogeneous and conflicting space of different identities and interest groups, each pushing its own agenda without a discernible universal feature binding them all – which echoes the post-structuralist attack on Marxist and Hegelian appeals to universalism. Furthermore, the decline of Marxism, anarchism, and other radical political movements that posited a coherent revolutionary actor, such as the working class, gives even more credence to the post-structuralist insistence that the social world cannot be made intelligible by totalizing, “scientific” theories.


However, these attacks on the intelligibility of the human world miss a crucial point, which makes the critique fatally flawed. The only evidence these attacks muster is the ideological justifications of the ruling class and the defeat of the programmatic Left. It is true that Hayekian marginalism is used as "proof" that the economic world is not intelligible to the human mind, thereby justifying increasing neoliberalization, and that the totalizing social movements of the early 20th century, with their coherent political programs and revolutionary subjects, have been almost completely supplanted by heterogeneous, big-tent movementism. Yet the ruling classes – those who control the State – still act from the perspective that the human world is intelligible. The State's actors cannot make political interventions without assuming a theory of how the human world works and a self-conscious understanding of their own function in "steering" that world toward a specific set of economic and social objectives. For example, the whole military and intelligence apparatus of the United States scientifically studies the geopolitical order of the modern world in order to apply policy that guarantees the economic and political supremacy of the American Empire. Governments craft economic policies by trying to understand the laws of motion of capitalism and using that understanding to administer the nation-state on a rational basis.

The skeptics of the intelligibility of the human world could protest the above assertions in different ways. One protestation could be that the existence of the technocratic state still does not reveal a universal, coherent ruling class. In other words, there is no bourgeoisie, "banksters," or other identifiable subject that controls the technocratic state for some identifiable reason – the State is simply an autonomous machine with no coherent trajectory or narrative. A second protestation is inherent in some interpretations of Adorno's and Horkheimer's Dialectic of Enlightenment: to make the human world intelligible to science is a method of domination, whereby human beings can be instrumentalized into stacks of labor power to be manipulated and administered. Furthermore, according to this criticism of Enlightenment, those particularities of the human world that resist scientific capture are violently forced to fit certain universals – for example, the violence Canada inflicted on First Nations when it attempted to forcibly "anglicize" them by abusing and destroying their children in Residential Schools.

Curiously, this second protestation – that rationality is used to scientifically dissect the human world in order to dominate it – exposes the weakness of the whole counter-rational project. The ruling classes do make the human world intelligible for domination, through their technocrats, wonks, and economists. The key point, however, is that they administer the world in the name of an objective that does not treat social need as its end. The behavior of the State does indeed show that the human world and history are intelligible; it is just that this intelligibility is instrumentalized in favor of some anti-human end. In reply to the first protestation – that it is impossible to identify a universal subject or the end the technocratic state pursues – I will say that the complexity of world capitalism does not imply that it lacks dominant trends that can be analyzed. The system experiences various tendencies, some in conflict with each other, but they can still be understood scientifically and from a bird's-eye view. For example, one of the key trajectories of the modern capitalist state is safeguarding the institution of private property and attempting to stimulate capital accumulation (e.g. GDP growth) – this is certainly an intelligible aspect of modern world history. The existence of conflicting trends within the State that counter the feedback of capital accumulation, such as inefficiencies caused by rent-seekers and corruption, only means that the State (and the human world) are complex systems with counteracting feedback loops, not that these objects cannot be made intelligible by scientific reason in order to understand them and ultimately change them.

The existence of contradicting feedback loops embedded in a complex system is not an argument against the scientific understanding of the human world. One can still try to understand the various emergent properties even if they contradict each other. For example, a very politicized complex system today is the climate. Although we cannot predict the weather – the atmospheric properties of a ten-square-kilometer patch on a specific day – we can predict the climate, that is, the averaged-out atmospheric properties of the whole Earth over decades. We have a very good idea, for instance, of how the average temperature of the Earth evolves. The climate system also has counteracting feedbacks: clouds may cool the Earth by reflecting solar radiation into outer space, while at the same time warming it through the greenhouse effect of water vapor. In the case of the human world, the same heuristic applies – we cannot understand everything that happens at the granular level, but we can grasp the average properties integrated across the whole human world.
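The weather-versus-climate point can be illustrated with a toy simulation (a hypothetical model of my own, not a real climate model): daily values are dominated by noise and are individually unpredictable, yet long-run averages recover the underlying trend.

```python
import random

random.seed(0)

# Toy model: daily "weather" = a slow warming trend plus large random noise.
# Any single day is unpredictable; the decadal average is not.
days = 365 * 30
trend_per_day = 0.05 / 365                      # +0.05 degrees per year
weather = [trend_per_day * t + random.gauss(0, 3.0) for t in range(days)]

# Averaging over a decade washes out the noise and recovers the trend.
decade = 365 * 10
averages = [sum(weather[i:i + decade]) / decade
            for i in range(0, days, decade)]

# The three decadal means climb steadily even though adjacent days
# differ by several degrees at random.
print(averages)
```

The design point is just the law of large numbers: noise with mean zero shrinks as the averaging window grows, so the aggregate signal becomes legible even when the granular level stays chaotic – the same heuristic the paragraph applies to the human world.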

These contradicting feedbacks do not make the climate system incoherent to science. Similarly, the existence of various subjects with conflicting interests in capitalism does not mean there cannot be dominant trends, or some universality underlying many of those subjects. At the end of the day, basic human needs such as housing, education, and healthcare are approximately universal.

The fact that the human world is intelligible, and that this intelligibility is instrumentalized by our enemies – the capitalists, the military apparatus, and the technocratic state – to exploit and degrade the Earth and its inhabitants for capital accumulation, means that we should make use of this instrumental reason to counterattack, not pretend that this Reason is incoherent or that it is a tool that corrupts its user. In fact, there are many examples of instrumental reason used for "good": for example, the concerted medical effort to cure certain diseases, which makes the human body intelligible in order to heal it. It is true, in a Foucauldian sense, that the clinic can be used for domination, but this power dynamic is just one feedback loop among other more positive ones, such as emancipating humanity from the structural obstacles of disability and disease. Thus, universal healthcare is proof of the use of instrumental reason for the purpose of human need and emancipation.

The usage of instrumental reason for social need and freedom harks back to Hegel. The world Hegel promised us at the end-point of history – the world of absolute freedom – is one where human beings become conscious of the intelligibility of history and therefore rationally administer it in the service of well-being and freedom. The only problem with Hegel's perspective is that he thought history marched deterministically toward freedom. Instead, to make history and the human world intelligible for human needs is a political decision that is not predetermined by the structure of history itself. Until now, the historical march of the last couple of centuries has been toward the increasing domination of the Earth and its inhabitants for the purpose of capital accumulation. However, just as the ruling classes make history intelligible in order to serve profit and private property, there is no necessary reason or law that prevents using the intelligibility of history for social need. The socialist political program is precisely this: to make the human world transparent to science and reason in order to shape it into a free society governed by human creative will, as opposed to the imperatives of toil and profit.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.