A Cold War We Should Avoid

The cold war I am referring to is the one brewing between the US and China.  From the American side it is touted as the competition for economic (and not only economic) supremacy in the 21st century.  From the title, you can guess that I am not in favor of waging this war.  It will be damaging to the world and it may not produce the results American policy makers expect.

The US won the previous cold war with the Soviet Union because, while maintaining military parity, it enjoyed unrivaled economic superiority.  China, though, is a whole different opponent.  China’s leaders have successfully combined Communist Party rule with the power of markets to lift China out of poverty and turn it into an economic superpower.  Economic heft has also allowed China to raise its international profile and make up for past humiliations at the hands of foreign powers.  Achieving and maintaining economic well-being and a significant presence in world affairs are the twofold objectives of China’s nationalist policy.

But China wants to go beyond these legitimate goals.  It is China’s new ambitions that have sounded alarms in the US.  The Belt and Road Initiative (also known as One Belt, One Road) was announced in 2013.  It aims at investing economically, culturally and educationally in a long list of countries and connecting China to the world economy as the Silk Road connected China to Europe in the late Middle Ages.  This initiative has already given China a non-trivial presence in the global supply chain of raw materials and transport.  Two years later, in 2015, China announced the Made in China 2025 initiative, which aims at making China self-sufficient in all critical high technologies and a leader in Artificial Intelligence by 2025.  It is exactly the prospect of losing not only superiority but also independence in cybertechnology (including AI) that defines the new cold war from an American standpoint.

So, how has America done in the years that China was turbocharging its economic growth following its admission to the WTO in 2001?  Not very well.  This twenty-year period exposed America to the risks of its capitalist model and its unsettled racial, social and political divisions.  The list is sobering: Corporate scandals spearheaded by Enron and WorldCom; collapse of the housing market and near-failure of the financial system; intense polarization leading to the Trump presidency and the questioning of democratic institutions, including the integrity of elections. 

The fall-out has been dramatic and consequential.  It includes the deindustrialization of the Rust Belt, the loss of well-paying manufacturing jobs, the decline of towns and communities, the “deaths of despair” from opioids, the squeezing of the middle class, and the exacerbation of income and wealth inequities.  The pandemic provided further evidence of the devastating consequences of unequal health care and job security in America.    

Even before the 1990s, it was America that from a position of strength had evangelized a liberal international economic order to the world.  We now realize that this policy was not well thought out.  The paradigm of achieving aggregate GDP growth and then expecting firms and workers to adjust to the evolving realities proved to be overrated.  Big firms were able to move abroad to exploit reservoirs of cheap labor, but their laid-off workers were left without support for retooling and reentering new jobs that paid as well as those lost.  The low unemployment rates of recent years fail to reveal the paucity of good jobs for blue-collar workers.

Workers were not the only piece of the economy affected by globalization.  The productive side of the country was also affected by the new geography of manufacturing sectors.  In a globalized market, each firm is interested in securing its production factors from the lowest-cost producers.  Minding the national or geopolitical interests of the home country does not necessarily rank high in firm strategy.  A free global market is non-threatening as long as there are no nationalist agendas.  US administrations and politicians now realize that China’s nationalist economic agenda threatens American national interests.  The mistrust is fueled by fears that China may use its cybertechnology industries to acquire access to information and manipulate it in order to further the interests of its centralized communist political system.  Such capabilities can become even more worrisome if coupled with mastery of Artificial Intelligence.

Thus, to counter China’s inroads into these high-tech sectors, American policy makers are coming around to two important compromises.  They are retreating from their internationalism and are embracing what has long sounded like anathema, that is, industrial policy.  Both compromises, though, pose serious challenges.  Adopting an industrial policy carries the risk of institutionalizing crony capitalism by extending political favoritism to well-connected firms.  To ensure that an industrial policy serves legitimate national interests, the American political system must accept and learn to run joint projects between the state and the private sector.  Given the inordinate influence money can buy in America, adopting an industrial policy that truly serves the national interest may prove to be too high a mountain to scale.

Forcing American firms to repatriate their operations will put pressure on them to contain the higher labor costs either through robotic technology or by suppressing wages or workers’ rights, like the right to form unions.  If so, the scarcity of good jobs for blue-collar workers is likely to persist.  With the prospect of sanctions and counter-sanctions, much is also at stake for global American businesses in social media and banking, which will, therefore, be unwilling to countenance a retreat from lucrative foreign markets.  That’s an additional challenge to policy makers in government and Congress.

Even if the American retreat from globalization comes to pass, it is unlikely to hinder China’s economic advancement.  In the next several decades, population growth in regions like Africa will be much higher than in the developed world.  And PricewaterhouseCoopers forecasts that between now and 2050 emerging economies will grow at twice the rate of developed economies.  These trends present China with ample opportunities to grow without full commercial ties to America or Europe.  And it is doubtful to what extent Europe will damage its own interests in the huge Chinese market out of solidarity with the US.

For all the above arguments America’s best approach may be one of competition with proper respect for vital national interests without resorting to fruitless and damaging antagonism.  The remarkable story of America and China is that two superpowers have established a rare interdependence built on the common ground of commerce.  It is in their mutual benefit to work out their national interests without sliding into a damaging cold war.  

Humans and The Environment

If an accidental visitor to planet earth (let’s use the name Siya*) took a good tour of the globe, Siya would easily come to the conclusion that the planet had been made to serve only its human species.

A quick search would have shown that there are over 7 billion humans living in practically every corner of the planet; that they are surrounded by billions of domestic animals that provide companionship or food; and that this trio of mammalian mass (of which 36 percent is human) comprises 96 percent of the total mammalian biomass, with wild mammals making up the remaining 4 percent.  Our visitor’s best chance to see some of this scant wildlife would be a visit to a zoo rather than an excursion to the wild.

And not to forget our feathered friends: Siya would find that domesticated fowl, like chickens, turkeys and ducks, outnumber wild birds by a factor of three.  By one estimate, humans consume over 50 billion chickens, 1.5 billion pigs, 500 million sheep and 300 million cattle each year.

If Siya’s assignment of reporting back home included a historical account of how humans became the masters of planet earth, our space traveler would be astonished to discover that things did not start that way, nor did humans rule the planet for hundreds of thousands of years.

First, Siya would learn that homo sapiens was the only lucky variety of hominids to survive on earth, with a smidgen of Neanderthal and Denisovan DNA.  Once homo sapiens secured their place on earth, they set out to inhabit the planet with devastating results for all other species.  Siya would learn that the extinctions came in three waves.**  The first wave started some tens of thousands of years before the agricultural revolution.  The second wave came with farming (about 11,000 years ago) and the third with the industrial revolution.  Each wave was more consequential than the previous one, as it wiped out more animal species and brought others closer to total annihilation or to the point of no return.

As long as homo sapiens developed mostly biologically just like the rest of the animal world the ecological balance was kept in a fair equilibrium.  Things changed though when homo sapiens passed the cognitive threshold that gave humans the ability to intervene in the natural environment, change it and design it to serve their interests.  Thus, every leap of cognitive and technological advancement of homo sapiens has resulted in further retrenchment of the fauna and flora of planet earth. 

How could we explain to Siya why humans separated their lot from that of nature?  Part of the explanation is that, unfortunately for the environment, human thought, secular or religious, was late in developing a nature-friendly ethical code.  Classical ethical philosophy as well as Western monotheistic religions were more preoccupied with morality among humans than human morality toward nature.***  We find more concern for nature in Eastern religions like Buddhism and Hinduism than in the Abrahamic faiths with their anthropocentric views.  Nature is also more revered in animism and paganism which attribute divine or spiritual powers and properties to nature (animals, rivers, oceans, celestial objects, etc.).  Indeed, we see this still reflected in the interaction of Indigenous people with nature.  A United Nations study has found that lands managed by Indigenous people have healthier ecosystems than lands conserved by governments.

We all have heard of Thomas Malthus and his prediction that exponential population growth relative to the slower growth of food production would bring the destruction of the human race.  Contrary to Malthus’s prediction, humans proved very smart and innovative in extracting from nature ever greater yields of food.  Malthus would have been closer to the mark had he theorized that the true danger to humans and their environment was not so much population growth as the “malady of infinite aspirations,” as Emile Durkheim (the modern father of sociology) called the tendency to develop endless wants.  John Maynard Keynes would also later warn us of the dangers of aspirational wants.

Our visitor Siya would notice that humans are still driven by hubris about their ability to manage nature and an unshakable sense of infinite expectations about the capacity of nature to sustain human life with abundance.  Siya would discover that humans count progress with a scorecard that only accounts for the satisfaction of their material needs, irrespective of what happens to the rest of life on earth.  Many derive confidence from the belief that divine providence in the sustenance of humans will last forever because of some covenant struck between them and their God.  Others simply push aside all troubling thoughts of an ultimate catastrophe because they are unable to suppress their greed for material gratification.  And others simply don’t believe in the science of environment and climate.

But Siya would also notice something else.  That humans are not only selfish toward nature.  They are also selfish toward each other.  Siya would observe small numbers of humans living in superb luxury and gluttony and many living in appalling conditions.  So Siya would come to the sobering conclusion that the plunder of nature is not committed in the interest of all humanity but, to a great extent, for the pleasure of a few.  And yet, when the earth’s ecosystem suffers, all humanity pays the price, whether rich or poor.  Actually, the poor pay the heavier price, given the hierarchical order of affordability.  Let’s call this the negative externality of redundant wealth accumulation.

By the end of this trip around planet earth, Siya would have read UN reports, government policy papers, scientific papers, newspaper editorials and op-ed columns.  One such report, from the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services of the UN, stresses the need for transformative changes in technological, economic and social factors if we are to arrest any further deterioration of the earth’s ecosystem.  Siya could promise to send us an assessment upon returning home.  Given, though, that Siya’s planet is 100 years away from earth, Siya would say, “I’m sorry, but till then you are on your own, and don’t forget the clock is ticking.”

* Since I have no idea how that foreign planet identifies a person as male or female, I chose to name my fictitious visitor Siya from the genderless pronoun siya used in Austronesian languages.

** The three waves are from Yuval Harari’s Sapiens.  Harari calls homo sapiens an ecological serial killer.

*** This gap between religion and human duty toward nature has been recognized over the last thirty years, and one outlet for those interested is The Yale Forum on Religion and Ecology.

Boredom: The Good and the Bad

Sickness and death from Covid-19 were visited upon many, but a whole lot more of us were afflicted by boredom.  Whether rich or poor, young or old, living in the countryside or in a city, we were left with empty holes of time in our lives, holes we could not fill with meaningful activities.  And most of us, I bet, cursed our boredom time and again and scornfully cast it in the heap of things we loathe.

But read more about boredom and you may start having a change of heart and mind.  The New York Times had an article about boredom during the pandemic, but it looked at it narrowly as the emotion that might have changed our consumption habits in temporary or even more permanent ways.  It so happened though that around this time I was reading a book about work by the anthropologist James Suzman,* and in its pages I discovered another perspective on boredom; a perspective that is more positive and informative.

To be sure, boredom can be the mother of some bad things.  It can drive people to alcohol or drug abuse, others to binge-eating, and others yet to binge-buying (look at Amazon’s sales).  Boredom as an emotional condition can be associated with chronic depression, or inability to pay attention to things we do, or a lack of capacity to find meaning in anything.  Boredom is leisure time that goes bad.  To be bored means to be self-aware and unfulfilled.  The 19th century philosopher Arthur Schopenhauer considered boredom to be a reminder of the meaninglessness of human existence.  Wow!

For most of us, however, the boredom we have been feeling during the pandemic is the result of the lockdown that replaced hours of work, socialization, recreation, entertainment and traveling with hours of – should we say nothing?  Full of energy and desires but nothing to do and nowhere to go.  Just the state of affairs that fits Leo Tolstoy’s definition of boredom: The desire for desires.

To have desires we must be aware of things we can do.  We must have experienced pecuniary or other pleasurable activities we desire to pursue.  And this brings us to the interesting questions about boredom.  What happens, for example, if you have a singular goal in life and you achieve it?  I knew nothing about Timothy Kim, but looking for answers to this question I came across his case.  Timothy always wanted to be vastly rich.  At age 31 he became a billionaire with his platform TubofCash.com and then he confessed he was bored!  Who runs the higher risk for boredom, a rich or a poor person?  Rich people can satisfy a lot more desires than poor people.  Does that make them more prone to boredom than poor people?  Well, I suppose it depends on whether boredom is a relative or an absolute emotion.  If it is relative, rich people suffer worse because they are deprived of relatively more pleasures than poor people.  But if boredom is absolute then rich people suffer less because they can still do some things (like ordering food from good restaurants) not available to poor people.

Instead of comparing contemporary rich and poor people, let’s compare ourselves to our hunter-gatherer ancestors.  Contrary to the popular belief that hunter-gatherers were struggling every minute of their lives to secure food and stay alive, they actually managed to be well-fed and sheltered with no more than two hours of work a day.  Remarkably, then, they had a lot more leisure time on their hands than their descendants, who had the misfortune to discover farming and, millennia later, the world of machines and the intense work culture the industrial revolution brought us.

How did they spend those hours of leisure?  Anthropologists tell us they sat around the campfire, slowly learning the art of socializing, mediating frictions among clan members, finding ways to entertain each other, and eventually developing the tool that would make these activities more communicative, that is, language.

With all this free time, they had to feel bored at some point.  But we can safely guess not as much as we do.  Their lifestyle was simple and their basket of desires was really small and shallow.  Not only were they able to meet their material needs very reliably and plentifully, they also had no serious desires for social status to pursue.  Their egalitarian social structure made sure that those with the potential or intention to acquire a higher status were brought back into line through shaming.

Nonetheless, boredom must have weighed so heavily on some hunter-gatherers that it set their minds loose to explore ways to escape it.  One of these mind-wandering moments discovered artistic expression.  Representational art in primitive sculptural form appeared 70,000 to 90,000 years ago, and the first cave wall paintings about 35,000 years ago.  Even Homo erectus, our ancestor of 600,000 to 800,000 years ago, took the time (because they had plenty of it!) to put an aesthetic finish on the stone tips of their spears that was not all that necessary to their effectiveness.  Thus, boredom born of leisure might well have been the impetus for the emergence of art and language.

The comparison of our lives to those of our foraging ancestors then suggests that the negative consequences and the intensity of boredom are another curse of our contemporary life-style and civilization.  Aware of the countless experiences and pleasures that are open to us and with our bottomless basket of desires, boredom becomes so much more salient and unbearable.

Back then, thousands of years ago, when time was in abundance and not the precious good it is today, boredom played its evolutionary role by giving our innate trait of curiosity an outlet to imagination, creativity and pursuit of meaning.  Eventually though, available free time and its offspring boredom conspired to push us into food producing methods and social structures that bonded us to work, generated novel experiences and gave us a world of desires from which it has become almost impossible to escape. 

Thus, that ancient boredom that gave us the innovations that took us beyond our basic desire to just stay alive with food and shelter is now responsible for all the discontent its modern version visits upon us today.

* James Suzman, Work: A History of How We Spend Our Time, 2020.

What’s In Fifteen Dollars

Some numbers have the power to capture the public’s attention and become symbols of fears, realities or aspirations.  So 13 terrifies us; 1% reminds us of economic inequality; and $15 is the battle cry for the minimum wage in America.  This post is about that last number.  In fact, it’s more than a number.  It’s about real lives.

First, I got curious as to what the minimum wage is in other economically advanced countries.  I found that in 2020 the minimum wage in the UK, Germany, France, Holland, Belgium and Ireland was over 1500 euros compared to just over 1000 in the US.  That’s a considerable discrepancy, so I had to look at the other side of the equation, unemployment.  From 2017 to 2019 (prior to the pandemic) average annual unemployment was around 4% in Holland, 3.5% in Germany, and 4% in the UK, that is, about the same as the 4% rate of the US.  Spain and Italy with a lower minimum wage than those countries had unemployment rates over 10%.  Denmark, Finland and Sweden do not have a minimum wage and yet their unemployment rate of about 6% was higher than in countries with a minimum wage. 

Though simplistic, these data tell me that the often-heard argument of a direct relationship between minimum wage and unemployment rate is not a slam dunk.  Numerous economic studies also fail to come to a uniform conclusion regarding the link between minimum wage and employment.  Now that we have thrown cold water on this debate-killing argument, let’s proceed with the rest of the story. 

The term minimum wage usually means an administratively set minimum price for labor.  The federally set minimum wage has stood at $7.25/hr since 2009.  Its purchasing power today is clearly below that level.  When we consider that tax brackets are adjusted for inflation to avoid taxing unchanged real incomes at higher rates, and that Social Security benefits rise with inflation, it becomes harder to argue against a minimum wage adjustment to protect its purchasing power.  So, that’s one point to keep in mind.  And here is another one.  The Congressional Budget Office estimates that raising the minimum wage to $15/hr will cost workers 1.4 million jobs.  This translates to a $10.15 million* loss of hourly income for workers.  But there are 27 million workers who make less than $15/hr and, hence, even a one-dollar raise in their wage translates to a $27 million hourly gain, much greater than the loss.  In a society that fetishizes aggregate income growth regardless of its distribution, the contemplated rise of the minimum wage sounds like a big winner.
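The back-of-envelope comparison above can be sketched in a few lines of Python.  The job and worker counts are the figures cited in the text; as in the footnote, the simplifying assumption is that every lost job pays exactly the current $7.25/hr minimum:

```python
# Assumption (as in the footnote): every lost job pays exactly $7.25/hr.
MIN_WAGE = 7.25                 # current federal minimum wage, $/hr
JOBS_LOST = 1_400_000           # CBO estimate for a $15/hr minimum
LOW_WAGE_WORKERS = 27_000_000   # workers currently earning under $15/hr
RAISE_PER_HOUR = 1.00           # even just a one-dollar hourly raise

hourly_loss = JOBS_LOST * MIN_WAGE               # income lost by displaced workers
hourly_gain = LOW_WAGE_WORKERS * RAISE_PER_HOUR  # income gained by the rest

print(f"Hourly income lost:   ${hourly_loss / 1e6:.2f} million")   # $10.15 million
print(f"Hourly income gained: ${hourly_gain / 1e6:.2f} million")   # $27.00 million
print(f"Net aggregate change: ${(hourly_gain - hourly_loss) / 1e6:.2f} million/hr")
```

In aggregate terms, even this deliberately conservative one-dollar raise outweighs the estimated loss by a wide margin; a raise closer to the full $15 target would widen the gap further.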

To critics, an administratively set minimum wage violates the law of demand and supply for labor.  But the labor market is full of distortions.  How do we explain, for example, barriers to entry into some professions (doctors, dentists and lawyers come to mind) that boost their wages by suppressing supply?  Or administrative requirements for an expert’s opinion (engineers, architects) even for small and mundane projects, which help increase demand for such services and, hence, wages?  Not only do these arrangements violate competition; they aim at elevating lifestyles from middle-class to upper-middle-class or even upper-class status.  Compare that with policies that try to pull millions of workers out of poverty to bare subsistence.  Which policies win the moral argument?

And what about the formation of horizontal (same-industry) conglomerates, which by the laws of oligopoly produce at a lower level than under perfect competition and, hence, have less demand for labor, which in turn suppresses wages?  Let’s also ask this question.  If the current minimum wage is above what firms can afford, why then does the slice of profits in our national income pie keep getting bigger than the slice of wages?  Somewhere in our economy labor must be losing ground, and the most likely suspect is the low end of the labor market.

There is another definition of the minimum wage that can set us on a more promising road.  That’s the wage that allows a worker to meet basic needs in shelter, food, clothing and recreation.  It’s what a living wage is meant to be.  This definition is often undermined by attempts to associate the minimum wage with teenagers or college students who just try to supplement their parental allowance, or with those who need only some part-time work that is weakly consequential to their overall wellbeing.  The truth, though, is that in the US the minimum wage is the only income source for millions of people struggling to have a decent living.  The reality is that at the current minimum wage of $7.25/hr an American family of two lives in poverty.

Viewed from this perspective, the minimum wage is very meaningful because it helps sustain the physical and mental health of the lowest paid workers as well as their participation in the labor force.  It also humanizes labor because it shifts the focus from jobs to employees.  As many economists argue, jobs are a statistic, but employees are the ones who bear the brunt of disruptions in the labor market.  The humane and socially responsible approach, then, is to decouple the living conditions of a worker from the lowest wage that equates demand and supply.  And this is all the more so in a country of extreme wealth for a few.

So, we now come to the crux of the problem.  To have jobs we need to have demand from firms at a wage they can afford.  To fairly compensate labor the minimum wage ought to be a living wage.  The only force that can bridge the gap (when such a gap exists) is comprehensive public policy.  There are several alternatives.  To ensure a poverty-free minimum wage the government could set a living minimum wage and then reduce the total labor cost, at least for smaller firms, through lower charges for programs, like Social Security and Medicare in the US.  Alternatively, the government should provide direct supplemental payments to workers that support a poverty-free living.  We already have such supplemental assistance programs but poverty still persists for millions of Americans.

The main point is that the debate about the minimum wage ought not to be about jobs lost or gained but about working lives and what it takes to keep them out of poverty and with dignity.  This approach then suggests that tackling the minimum wage calls for more comprehensive policies that support the demand for labor but also recognize the value of labor and protect the lowest paid workers from the vagaries of the labor market. 

* If 1.4 million workers lose their minimum wage of $7.25/hr, they suffer a total hourly loss of $10.15 million.  Even if they lose more than $7.25/hr (because they are paid more than the minimum wage), it is still highly unlikely for the lost income to overtake the total gain the 27 million workers will enjoy from a higher minimum wage.

In Search of The Common Good

Invoking the concept of the common good as an organizing principle of a society is one thing; trying to define it, though, is a major challenge.  Like the Odyssey, setting out for the common good is a journey full of temptations that can throw you off course, full of risks of making wrong choices, full of adversaries that want to stop you from ever reaching Ithaca.  Since I raised the concept of the common good in my last blogpost, it’s now time to say a bit more about it.

From early on, the common good has been discussed through two different lenses.  One is that of the individual, the other is that of society.  The first approach defines the common good as the sum total of individual interests.  This is the way the common good is attained through the invisible hand of Adam Smith.  Self-interest and ambition checked and balanced in the marketplace produce the greatest good for society.  Adherence to unfettered markets, however, threatens attainment of the common good not because Adam Smith advocated that self-interest should come free of morality (actually the opposite) but because, as we know, markets can fail, and when they do, they serve neither the individual nor the common good.     

Some eighty years after Adam Smith’s The Wealth of Nations appeared, the English naturalist Charles Darwin published On the Origin of Species.  Based on a false interpretation of Darwinian evolution, Herbert Spencer coined the unfortunate expression “survival of the fittest.”  This became the premise for a very charged individualistic approach to defining the common good.  A good society is one whose members are strong enough to meet the challenges of social survival.  Society should weed out weak and free-loading individuals.  Resistance to social safety nets and welfare programs is a modern echo of the Spencerian principle.

On the other end of the spectrum, we have the top-down approach that prescribes a common good for all in the interest of achieving salvation or state supremacy.  These are the conceptualizations of the common good by religious zealots or authoritarian political movements. 

It is between extreme individualism and top-down authoritarianism that the search for the common good becomes most challenging, because it requires an optimal balance that can be so elusive.  In this tradition, the common good is realized in societies and states where there is a mutual interdependence between the interests of the individual and those of society.  For Aristotle* (considered to be the father of the concept of the common good), the good society is one that enables its members to realize their full potential.  The common good is attainable only through society and yet it is individually shared by its members.  Each person should take ownership in the attainment of the common good and contribute to its enjoyment by fellow citizens, since enabling everyone to realize his/her potential is the essence of the common good.

This conceptualization of the common good makes it the shared responsibility of the citizens and the state.  Realizing one’s potential depends on the means and opportunities to which one has access, and, hence, how a society is organized.  It is here that a modern philosopher, John Rawls, has made an intriguing proposition.  Rawls invites each one of us to go behind a veil of ignorance and forget who we are, male or female, privileged or not, well-connected or not, physically or mentally gifted or not, and then choose the social organization within which we would like to live.  That would determine then how a good society ought to be organized so that even its least fortunate and weakest members have a fair shot at realizing their potential and share in the happiness of life.  It is the value of potential self-actualization and preservation of dignity even for the weakest of us that elevate education, health and avoidance of poverty to legitimate rights and part of the common good.    

Attainment of the common good comes with the surrender of some private benefit or freedom of choice from each one of us.  Therefore, it is important to show that attaining the common good is worth this loss.  It is easy, for example, to see how a common defense or public roads system provides private benefits.  It may not be as easy, though, to understand that public financing of education generates private gains for all.  Only when the desire to attain the common good becomes part of the cultural fabric of a society do individuals count it as a source of satisfaction besides their own private accomplishments.

Charity and morality have been used for millennia to motivate people to subscribe to the idea of the common good.  But practical wisdom also needs science to draw the circle of common interests and to show how to manage them.  It is the science of evolution that has shown us how sociality enabled humans to survive and become a more resilient species.  It is science that is alerting us to the risks of climate change.  It is science that exposes the harmful effects of poverty on the cognitive and psychological growth of children.

The unflattering fact in the search for the common good is that it takes a common threat or an unbearable indignity to make us coalesce and form a more socio-centric worldview.  In the last century, it took two devastating world wars and an economic catastrophe, with their respective fears of death and hunger, for people to become more aware of their common destiny.  It took the indignity of racial discrimination in America to enact laws in the sixties protecting the civil and voting rights of Black Americans and other groups.

But it took only twenty years for America to fall back to the individualistic conceptualization of the common good.  The rise of stark inequalities in economic outcomes, health care, educational attainment and child care, as well as our divisions in handling the risks of the pandemic and understanding the climate challenge, are witness to how far we have veered from the sense of the common good.

The common good is more than individual freedom and civil rights.  Actually, both are in peril without a social compact that gives citizens the basic means and opportunities so that they come to accept certain interests as common and worth striving for.

*Aristotle’s common good comes with the caveat that it was not all-inclusive.  It referred only to the interests of free male citizens, to the exclusion of women and slaves.

Economics for The Common Good

I have borrowed this title from a book written by the French economist Jean Tirole, winner of the 2014 Nobel Prize in economics.  Tirole’s goal is to show how a society can use the discipline of economics to pursue its common good, whatever that may be.  It’s like saying, let’s show how we can use the laws of aeronautics so we can fly from here to there.  In other words, Tirole reminds us that economics is a means to reach an end, not the other way around.

That’s important because many, whether out of ignorance or calculation, identify economics very narrowly with institutions and practices that advance the interests of some people while ignoring or hurting the interests of others.  It is a social loss that most students leave their secondary education with little understanding of economics.  This limited knowledge is largely responsible for the rise of populist economic ideas and for support of policies that worsen rather than improve the economic interests of society.

Although economics can provide more informed and efficient answers to many practical problems, the road to employing it in the service of the common good is full of challenges and tough choices.  Knowing how markets work, being able to design economic contracts that optimize the interests of sellers and buyers, and having answers for the economics of climate change and the digital economy do not necessarily take us to the common good.

To grasp the potential and the limits of economics as a means of serving the common good, we first need to understand the roles of the market and the state.  Tirole reminds us that markets are mere mechanisms of exchange without an a priori purpose to serve this or that common good.  They have no inherent morality of their own; nor do they by themselves produce the distribution of gains a society prefers.  Market failures and outcomes rather reflect the moral values of societies and the market rules they set.

The economic roles of the market and the state are not mutually exclusive; they are complementary.  We rely on the state to guarantee contracts and property rights, to keep competition fair, and to correct market failures.  If we were all honest and had all the information we needed, transactions would be fair and the state would have less of a role to play.  Adam Smith believed that self-interest would make markets work well for both sellers and buyers.  But often self-interest veers into exploitation of other market participants.  Thus, a bank may engage in reckless lending and fail to redeem the savings of its depositors.  Or a firm may deliberately withhold vital information affecting the value of its stock and bonds.  In these and other cases where behavior and information are important, state regulation is the necessary remedy.

Just as the market is open to failure, so is the state.  Political power can enable special interests to capture the state authorities that set and enforce regulation and hold individuals and firms accountable for the consequences of their economic actions.  The winners and losers of an economy are often determined by the political power that special interests and groups can wield.

The main actors in markets are business organizations, which operate under different organizational forms.  They may be non-profit entities, simple proprietary firms, cooperatives or corporations.  Each form serves the interests of a distinct set of stakeholders, the most dominant being the shareholders.  But do their interests serve the common good?  And how do we align the interests of these organizations with the common good?

Pursuing the common good is not cost-free.  We need to decide how the costs of negative externalities (like pollution, displacement of workers, community decline) are to be shared between private business and the state.  Society as a whole can also produce unwelcome externalities.  The more innovation-intensive and globalized a society prefers to be, the more turmoil will prevail in its industrial and labor fabric.  The more individualistic a society is, the more economic inequality will exist.  Again, the question is whether a society will ignore the negative effects of these choices or serve as a shock absorber and stabilizer.  The more of the burden that falls on the state, the more willing we ought to be to pay higher taxes.

The group most affected by the structure and performance of an economy is the workers.  Tirole argues that a good economic policy should protect workers, not jobs.  Since we have very little control over jobs, it is the workers we need to protect as firms, industries, even the whole economy transition to a new phase.  In the US, we have learned the hard way the costs of lacking a sound transition policy as the pace of offshoring intensified through the 1990s and beyond.  The anxiety of workers in declining industries (coal miners, for example) has a lot to do with this lack of a transition policy.

Tirole stresses that “Economics is not in the service of private property and individual interest, nor does it serve those who would like to use the state to impose their own values or to ensure that their own interests prevail.  . . . Economics works toward the common good: its goal is to make the world a better place.”

But after the impartiality of economics toward the market and the state is established and its dedication to the pursuit of a better world has been declared, the challenge of defining a better world still bedevils us, along with the question of how we get there.

As argued above, pursuing a common good requires that we accept a tradeoff.  Scandinavians trade high taxes for state services in education, health and retirement benefits.  The French trade stubbornly higher unemployment for job protection.  Many countries have minimum wage laws even if this may mean some unemployment for low-skill workers.  In the US, belief in the primacy of markets and private enterprise forecloses initiatives for universal health insurance.

Tirole’s book makes a persuasive case for the analytical rigor of economics and its ability to guide us toward more optimal solutions.  But at the end of the book, the common good itself remains elusive.  It is as if we have been given a perfect airplane but must still choose our destination.  For this we need more than economics.

Child Poverty Is Everybody’s Problem

It is very encouraging and promising that there is a bipartisan movement to seriously address the scourge of child poverty in the US.  When I check international data to find where the US ranks in indicators like child poverty, I feel compelled to check and recheck the numbers and consult different sources.  I do this because I find it difficult to believe that so rich a country ranks so low in taking care of its young people and its future promise.

Let me say at the outset that there are different estimates of poverty, and child poverty in particular, so that one can come up with different numbers and international rankings.  For example, research out of the American Enterprise Institute disputes the US numbers used for international comparisons and contends that the US ranks close to other similar countries like the UK and Canada. 

Even so, in a country of extreme inequality, mild national averages for a socioeconomic indicator can hide the very precarious state of considerable segments of the population.  Even after one adjusts the poverty levels by counting various government programs, the fact remains that there are pockets of significant child poverty in the US.  For example, the Children’s Defense Fund reports that one in six children lives in poverty in this country.  The ratio is one in three for Black and one in four for Hispanic kids.  Across the US, child poverty rates are significantly higher in lower-income states and states with significant numbers of people of color.  Yet even California and New York State have child poverty rates above the national average despite their overall prosperity.

The consequences of child poverty are grave in terms of economic impact, social mobility, health, cognitive and emotional development, and, of course, social adjustment and crime.  The Children’s Defense Fund estimates that the effects of child poverty amount to a loss of $700 billion of annual GDP.  Social mobility studies utilizing the intergenerational earnings elasticity (IGE) have found an elasticity of about 0.5 in the US, meaning that roughly half of a parent’s earnings advantage or disadvantage persists, on average, in the adult earnings of the child.  For a child born into a poor household, that much of his or her adult earnings is predetermined by the low earnings of the parents.  Unless we believe that forgoing part of a person’s potential for economic and social attainment is no waste and does not matter for social harmony, we should have no difficulty acknowledging that investing in children can give a society its biggest payoff.
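The arithmetic behind the intergenerational earnings elasticity can be made concrete.  Below is a minimal sketch in Python, assuming the standard log-log definition of the elasticity; the 0.5 figure is the one cited above, while the specific parent-earnings ratios are hypothetical illustrations.

```python
import math

# The IGE is the slope b in the regression
#   log(child_earnings) = a + b * log(parent_earnings) + noise.
# An IGE of 0.5 (the US figure cited above) means half of any
# log-earnings gap between parents persists, on average, in their children.
IGE = 0.5

def expected_child_ratio(parent_ratio, ige=IGE):
    """Expected child earnings relative to the average, given the
    parent's earnings relative to the average (ignoring the noise term)."""
    return math.exp(ige * math.log(parent_ratio))

# A parent earning half the average income:
child_of_poor = expected_child_ratio(0.5)   # ~0.71: child expected ~29% below average
# A parent earning twice the average income:
child_of_rich = expected_child_ratio(2.0)   # ~1.41: child expected ~41% above average
print(round(child_of_poor, 2), round(child_of_rich, 2))
```

In other words, with an elasticity of 0.5 the earnings gap shrinks across one generation but does not disappear: the child of a parent earning half the average is still expected to earn well below it, and in countries with lower elasticities the gap fades faster.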

It is important to understand that the failure to fulfill one’s potential for economic and social attainment is the result of what poverty does to a child’s cognitive and emotional development.  The effects are the product of interactions between genetic and environmental factors that affect the brain and health of the child.  Adverse environmental conditions include poor nutrition and health care as well as problematic family and social situations.

Thus, child poverty is very relevant to one’s adult life.  Two kids born with very similar genetic predispositions can have dramatically different adult lives.  The kid born into a favorable economic, family and social environment is a lot more likely to be successful later in life than the kid born into poverty and adverse family and social conditions.  Ignoring the effects of childhood experience on adult life impacts how we perceive and, more importantly, attribute success and failure in adulthood.  A much higher percentage of Americans than Europeans attributes success in adulthood to personal effort and merit, and more Americans than Europeans agree with the notion that “people are poor because they are lazy or lack determination.”  When we fail to understand the link between childhood poverty and adversity and adult life, we are more inclined to oppose public support programs for adults.

The developmental effects of poverty stem from the fact that the frontal cortex is the last part of the brain to reach full development, continuing to mature through adolescence.  The frontal cortex is important for executive functions and for regulating our emotions.  Any impairment in its development impairs cognitive and emotional maturation.  It is known that poverty and an adverse environment during adolescence can hamper the development of the frontal cortex.  And because of its late development, the quality of the frontal cortex depends less on genes and more on environment and nurturing.

Studies have shown that childhood adversity, including poverty, raises the odds of depression, anxiety, substance abuse, impaired cognitive capabilities, impaired impulse control and emotion regulation, antisocial behavior, and troubled relationships.  In other words, being born into poverty means you have a lot more barriers and challenges to overcome in order to succeed.  Since child poverty implies household poverty, inferior prenatal care and maternity conditions also contribute to possible problems in the development of the brain.

The negative effects of poverty on the physical and mental health of poor people are also accentuated under conditions of inequality.  What has been found to really matter is not the condition of being poor but rather the condition of feeling poor.  Children growing up in poor households and neighborhoods become aware of their low socioeconomic status, and this further contributes to their uneven development.

Beyond these negative effects afflicting American children, we need also to account for inequality in educational attainment due to more limited resources in school districts attended by poor children.  Therefore, the extent of the problem of child poverty is serious and complex.  Making progress in the war against child poverty requires investments that support robust educational opportunities and outcomes, good nutrition and good health care.

If we look beyond the US, the good news is that extreme poverty (which, of course, affects children) has declined from 50% of the global population in 1966 to 9% in 2017.  This is a tremendous improvement for mankind.  A lot of this achievement is due to the decline in births per mother, from 5 in 1965 to 2.5 in 2017.  With fewer children, a household can take better care of its offspring.

A good society is not one that neglects its most vulnerable members.  The challenge for America is to become a global role model in line with its status as the richest country.

The Crumbling Wall of Separation

The United States has no formal religion.  It has no religious test for public office holders or an oath to divine authority.  And its Constitution (the First Amendment in particular) prohibits the state from favoring any church establishment.  If, however, you came from abroad, unfamiliar with this country, you would have to be excused for mistaking America for a country engulfed in symbols and practices more commonly associated with states steeped in religion.

When you convert your foreign currency into dollars, you will see the phrase “In God We Trust” emblazoned on the bills.  If you attend a public event, you will hear people pledging allegiance to a country “under God,” and if you witness the oath to public office, you will hear it end with the words “So help me God.”  None of these religious manifestations existed at the creation of the United States, consistent with the letter and intent of the Constitution to keep matters of faith and state separate.  Had we lived back then, we would have noticed a lot of religious fervor and widespread practicing of religious duties among those early Americans, but no overt signs that the state and its civil servants were out to promote any religious creed.

The fact that in this and other ways the US has moved away from its founding agnosticism is the second paradox about religion in today’s America.  In this, as in other respects explored in the previous blogpost, America differs from the countries its first colonists sailed away from centuries ago to escape religious persecution and wars.  What we are observing is that the separation of church and state is more and more interpreted by religious zealots as a way to keep the state out of religion, conveniently ignoring that the reverse is also part of this constitutional arrangement.  The ever more intense forays of religion into the “public square” are all the more interesting when we consider that the fraction of Americans affiliated with religious establishments, including Christian churches, has been shrinking.

Contrary to the complaints of religious activists that religious liberties are under attack, religious freedoms are very well protected, and well-coordinated litigation and political pressure have actually blurred the lines of separation between church and state.  No longer are religious establishments excluded from the allocation of public funds, even when those funds can be used to support direct religious activities (exactly the case with the Paycheck Protection Program of the Covid-19 relief law).  Service to customers can be denied on the grounds of freedom of expression and religion.  Health insurance coverage for contraceptives can also be denied to employees for religious reasons.  Government funds cannot be used for abortion, despite its legality.  At the behest of religious organizations, Republican administrations routinely deny aid to foreign agencies providing reproductive and abortion-related services to poor people.  The Trump administration went even further with its “Conscience Rule,” which would have allowed medical professionals to deny care on the grounds of religious or moral beliefs.

In general, we observe that religious activism has a two-pronged objective.  One is to strengthen the influence, if not the grip, of religious interests on judicial and government authorities.  The other is to shape the moral landscape of Americans in ways that conform to certain Christian beliefs.  Actually, the first objective is motivated by the second, which is also the one that should have us all worried because of its political and constitutional consequences.  A little history here is instructive.

For its first three centuries, operating within the Roman Imperium, Christianity grew on the strength of its moral and spiritual message with no state support.  Once, however, it was declared the official religion by Emperor Theodosius the Great at the end of the fourth century, Christian leaders sought to erase any religious competition.  By winning over or waging war on pagan rulers, Christianity succeeded in becoming the official religion throughout Europe.  This method of expansion to new peoples under the aegis of the state continued in the centuries of exploration and colonialism.  Evangelizing to others has been a time-honored mission of Christian churches.

But capturing the state and using it as a tool to force the morals of any faith on others can undermine the principle of religious tolerance and eventually even the principle of democratic life.  No other bloc of American Christians has done this with greater determination than Evangelical Christians.  Despite Trump’s serious moral flaws, his denigration of women and people with disabilities, and his harsh policies toward immigrants and Muslims, Evangelicals, and especially white Evangelicals, embraced him as their champion and even savior in an almost messianic way.  In their desire to continue with a political regime that promised to advance their moral and religious agenda, they went so far as to forswear their allegiance to the democratic governance of the country by becoming perpetrators of the ‘Stolen Election’ lie.

What is more worrisome, however, is the willingness of politicians and even of a whole party, i.e., the Republican Party, to reciprocate the embrace of the Evangelicals.  Today, Evangelicals comprise the single largest religious bloc of the Republican Party.  A 2019 survey revealed that 78% of Evangelicals are registered Republicans, compared to 56% in 2000.  This strong party loyalty of Evangelicals is explained by the entreaties they see coming from Republican politicians.  Besides Trump, who assured them that “God is on our side,” former Secretary of State Mike Pompeo declared himself a “Christian Leader” on the homepage of the department’s website.  Other Republicans touting their loyalty to Evangelical priorities are Mike Pence, Ted Cruz, and Josh Hawley, all of them with presidential aspirations.

This party symbiosis with a single religious bloc is entirely new, at least in its intensity, in the recent history of American politics.  It is more reminiscent of those past alliances of political, government and religious leaders that led to intolerance, strife and violation of the political and civil rights of opponents.  The politicization of religion, if it continues, will gravely challenge the future of the American Republic as a multicultural, multi-faith, and open polity.  The end result will no longer resemble anything the Founding Fathers had in mind.

These trends, I believe, should have all democratic-minded Americans worried, irrespective of religious or secular beliefs.  White Christian nationalism taking root in American politics is not just a paradox in a country in which the things that are Caesar’s ought to be separate from the things that are God’s.  It is rather outright dangerous, and, yes, un-American.

The Religion Paradox in America

One of Thomas Jefferson’s most prescient arguments for the separation of church and state was that, left alone to fend for themselves, religious establishments would gather strength from the solidarity and dedication of their members instead of growing complacent under the aegis of the state.  By arguing for the separation of church and state, Jefferson (and his fellow Virginian James Madison) also hoped to distance the state from religious rivalries.

More than two centuries later, Jefferson’s argument appears to have been fully validated.  The US has strong and thriving religious establishments of all creeds, and religion is more prevalent in American society than in almost any other advanced industrialized country.  On the other hand, the expectation that separation would protect the state from the encroachment of religion has hardly survived the test of time.  (More about this in my next post.)

Let’s start with religious adherence.  According to a 2018 survey, 41% of American Christians attended church services at least once a week, far ahead of their coreligionists in Western Europe.  A Pew Research Center survey also revealed that religion was more important in the lives of Americans than in the lives of Western Europeans.  When examined within the United States, these religious indicators are stronger in conservative than in liberal states.  So, the question arises as to whether the more intensive religious commitment of Americans is matched by an equally strong performance in various social indicators that reflect the influence of moral and hence religious precepts.

To answer this question, I checked various international statistics of recent years.  UN data show the US with 20.8 abortions per 1,000 women, higher than in the more secular countries of Western Europe.  Do more religious states in the US have lower abortion ratios (abortions per pregnancy)?  The answer is yes.  Is this, though, due to religious attitudes or to stricter restrictions in these states?  The evidence I found suggests that abortions bear no significant relationship to religious creed in the US, whereas an international study revealed that abortion rates are lower in countries with more liberal policies toward abortion.

What about divorces and out-of-wedlock births?  In both, the US ranks ahead of almost all Western European countries.  Within the US, divorces and out-of-wedlock births are in general higher in the South, South-West and the Mid-West than in the more liberal states of the North East and West coast.

Next, I looked at suicide and drug death rates.  UN statistics show the US ahead of Western European countries in both causes of death.  With 314.5 drug deaths per 1 million people, the US is far ahead of second-place Sweden with 81.  CDC (Centers for Disease Control and Prevention) data show that both suicides and drug deaths are higher on average in the South, the Mid-West and the Rocky Mountain states.  New York ranks 23rd, with fewer drug deaths than 21st-place Florida.  West Virginia is number one in that sad statistic.

Poverty and incarceration are two closely related social ills.  The 2019 survey of the OECD (Organisation for Economic Co-operation and Development) places the US 35th out of 37 developed countries in overall as well as child poverty rates (that is, 34 countries scored better).  The US is also the world leader in incarceration, with a rate of 665 per 100,000 persons.  Poverty rates are higher in Southern and South-Western states, and incarceration rates are higher in Mid-Western and Southern states, regions ranking higher than the national average in religious adherence.

These results point to a paradox about religion in America.  Despite greater religiosity and closer affiliation with religious establishments, Americans do not seem to perform better than the peoples of countries known for their secular culture and politics.  More tellingly, even within America, states known for their religiosity do not seem to perform better than more liberal states.

What do these findings tell us?  Do they mean that stronger religious attitudes lead to worse moral behavior?  Can we argue that Americans are more morally challenged than the more secular societies of Western Europe? 

First, let’s put to rest one claim often heard from religious people: namely, that religious affiliation leads to a more moral life.  This has been an old canard against atheists, agnostics and secularists in general, though without any factual basis.  For example, in a speech given at the University of Notre Dame, William Barr, the former Attorney General of the US, denounced secularists for “moral chaos and immense sufferings, wreckage and misery” in the US.  The above findings instead show that many of the serious ills of American society originate in states with greater adherence to religion.  This association has already been established in the past.

But equally unfounded would be the claim that religious people are less morally inclined than others.  What if, behind the association of moral outcomes and religiosity, there are other factors that explain the correlation?  Such well-established factors are less education, poorer economic and job conditions, and inadequate public services.  The statistics I looked at are better in Western Europe to no small degree because of wider and stronger safety nets that result in less poverty and social alienation.  These conditions in turn have a mitigating effect on crime and suicides.  Better drug rehabilitation programs also result in lower incarceration rates and fewer drug deaths.

In the US, some of the worst statistics are reported in more religious states which also happen to have significant pockets of lower educational attainment, weaker economic conditions, lower quality jobs and insufficient public services.  Many of these are the states where the “deaths of despair” have surged in the last 25 years (as explained in an earlier post).

What are then the really important conclusions we can draw from this analysis?  First, the virtue wars between religious and secular people are entirely futile and counterproductive.  Second, the road toward better societies is through public policies that produce better educated citizens with more opportunities for economic advancement and greater support from the state in coping with the vicissitudes of personal life.  

What Brexit Really Means

While America was gripped by the double anxiety of a raging pandemic and the desperate and unlawful attempts of an outvoted president and his die-hard and misinformed supporters to cling to power, the world also witnessed another sobering event: Brexit.  Great Britain at long last was leaving the European Union, making the English Channel again more than a mere geographical divide.

I call Brexit a sobering event because to me it is one more reminder of how difficult it is for humanity to build inclusive and enduring bonds and stay together.  The tendency toward fragmentation reminds me of the biblical story of the Tower of Babel.  Men and women worked together to build it.  But then, as they were coming close to their goal, God decided to give them different languages.  Cooperation became impossible and the project collapsed.  Humankind would splinter into different factions, each going its own way.  The fact that God’s will was the culprit of this fragmentation does not make it any less unfortunate and, over time, destructive.

In today’s world, the role of a religious God is played by a host of humans playing god, equally determined not to let humankind come together.  These human gods take the form of ambitious politicians or selfish business people.  Fragmentation, that is, the “Us” versus “Them” divide, becomes for some the road to power and treasure.  Such gods wrought Brexit by telling an anxious working class of Britons lies and half-truths about Brussels bureaucrats, hostile immigrants and the promise of renewed glory for Old Albion.

The move to an untethered Great Britain harkens back to the idea of the nation-state: the idea that a country with greater national, religious, and cultural cohesion is a more effective administrative unit.  But the historical record is mixed.  The Greek city-states thrived as independent entities while external threats were effectively managed.  But disunited, they eventually fell to the armies of Macedon.  The disparate Hellenistic kingdoms became renowned cultural centers, but they, in turn, succumbed to the power of Rome.  The Roman Empire, first based in Rome and then in Constantinople, the Holy Roman Empire, the Ottoman Empire, the British Empire and the Soviet Union all ruled over dozens of peoples with different ethnicities, languages and creeds.

All these (and other) empires were militarily strong, kept land roads and sea lanes free, and protected their peoples from foreign enemies.  They did unify large parts of humanity, but under autocratic rule that did not always respect the rights of different ethnic and religious groups.  When dogmatic religious or political ideologies prevailed, these empires would also squelch cultural, intellectual, and artistic creativity.  When, years ago, I read Jacob Burckhardt’s history of the Renaissance in Italy, I could not help but realize how the Renaissance blossomed out of the independent city-states of Venice, Florence, Padua and Genoa, which let the arts and letters thrive by fending off Rome’s Papal power.  Soon after that, the creative explosion of the Renaissance emerged not in the cities of the Holy Roman Empire but in independent and more democratic Holland.  A century later, the political, social and commercial preconditions that led to the rise of free markets and capitalism first took hold in England, not in the rigid multi-ethnic monarchies of continental Europe.

So, the lesson of history is that large state conglomerations project power and stability but often stifle individual rights, creativity and innovation.  Nonetheless, this is not an argument one can raise in defense of Brexit.  Great Britain is not escaping an autocratic empire.  It leaves a union of democratic states, each with enough autonomy to foster creativity and innovation, and all dedicated to civil and individual rights.  The European Union is the first experiment in history in which independent democratic countries decided to cede some of their sovereign power in the interest of pan-European peace and a common future.  If the concept of the nation-state after the Treaty of Westphalia in 1648 was the right solution to bring an end to the religious wars of Europe, equally consequential was the Treaty of Rome in 1957, which established the European Economic Community as the solution to putting an end to disastrous intra-European national conflicts.  It is against this bigger purpose that any cost and friction of a unified Europe must be weighed.

It is, therefore, from this big-purpose project that Great Britain is walking away.  And what an irony this is!  The same Great Britain that had no qualms about ruling over half the world in the name of the Crown now balks at a European order in which it had an equal voice, a voice it denied its imperial subjects.

Around the time Great Britain was building its empire, here in America a newly independent country was embarking on a novel experiment: forming a multi-ethnic democratic state within its own borders.  Unlike the British project of joining foreign peoples from all corners of the globe under British rule, the American experiment was to become the home of people from around the globe governed by a constitution of the people.

As happens with all undemocratic empires, in time the peoples that made up the British Empire split off to pursue their own national destinies, and Great Britain itself retreated to its geographical and national borders.  That is a devolution not open to America.  Here we are destined to live together – multi-racial, multi-ethnic, multi-cultural and multi-creed.  We have no internal borders behind which we can retreat and live in racial, ethnic and religious purity.

That’s why the trends of racial friction and the rise of religious and white nationalism we have seen in recent years should be sobering to all Americans.  The American project, like the European Union’s, is to teach people the possibilities of “We” in contrast to the fear of “Others.”

So, to me Brexit means walking away from building a “We” world just like the splintering of Americans by race, creed, or any other divisive idea is walking away from the original American project of building One out of Many.