From AI to UBI?

In 2008-09 and 2020-21, the governments of America and many other nations put in place policies based on a panicky fear of contagion rather than reason. Nobody can afford a “threepeat” performance, yet some policy pundits use mounting irrational fear of emerging technologies like artificial intelligence (AI) to urge the immediate implementation of unprecedented universal basic income (UBI) policies.

AI is not about to render humans jobless, but even if computers eventually do take over most jobs, humans will not be without income so long as they maintain their economic freedom, especially their right to make a living, be it from proprietorship, financial investment, subsistence activities, or, perhaps, modified forms of employment. All humans can flourish without resorting to massive income redistribution and Bernie Madoff-like accounting legerdemain.

Although employment has been the most common way of making a living in rich countries since the postwar period, not all paid work must be, or traditionally has been, done via employment, the terms of which have constantly evolved to meet changing business, economic, and technological conditions.

Pessimistic predictions often prove popular; a quick review suggests that many people prefer existential angst to rational optimism. Most tellingly, perhaps, the world was supposed to end in 66 AD. Then in 365. Then in 375, 500, 793, 800, and 1000, to name just a few predictions from the First Millennium of the “common era.”

But that poor predictive record did not dissuade somebody or other from predicting the end every 30 years or so throughout the Second Millennium, including in 1901, 1914, 1941, 1975, and 1999. If anything, the pace of pessimism has quickened in the Third Millennium, with catastrophe awaiting the planet almost every year from 2001 until 2020, not just in infamous 2012. The world looks a bit ragged of late, but it’s still here and in some ways has never been better.

Some policy experts assert that they can not only predict the future but actually thwart its worst aspects, if only given enough money and power. The future is going to be really bad, they claim, but they hold the cure, if only you will pay them sufficient tribute. As Public Choice theory predicts, however, most leaders leverage fear to gain power without supplying an actual antidote.

The bailouts that followed the Panic of 2008, for example, were implemented on the grounds that bankruptcies would spread throughout the economy like a computer or biological virus if governments did not save large firms in financial trouble with taxpayer money. Policymakers credited themselves with stopping a second Great Depression even though their own policies undoubtedly helped to cause the crisis. Unfortunately, we will never know what would have occurred if those vast sums had not been shifted from taxpayers to risky private businesses. For all anyone knows, the economy may well have rebounded much faster than it did.

Despite their horrible track record, some prognosticators still expect people to believe that 1) AI will soon displace the majority of existing jobs and 2) the only sensible response to massive job loss is to implement UBI right now. Neither claim stands up to scrutiny.

First, the economies of rich nations continue to create jobs, pretty much in direct proportion to their Economic Freedom index scores. In the U.S., unemployment (in states not in lockdown) remains low and millions of jobs go unfilled each year. Numerous past technological shocks have increased per capita output, productivity, and real compensation (wages and benefits adjusted for inflation) rather than causing destitution. That’s because when businesses are free to innovate, workers displaced by machines (or by overseas workers or anything else, save lockdowns and other forms of diktat) become available to do new kinds of work, in new places and ways, hitherto economically unviable.

And in free countries, those unsuited for employment find other ways of making a living, from owning their own businesses to living off financial and real estate investments, perhaps supplemented with subsistence activities (doing things for themselves instead of buying them in the market).

A relative few may receive unilateral transfers from charities and/or the government, but traditionally most people have sought to maintain as much independence as possible. Just because everyone nominally receives the same number of dollars from the government (i.e., taxpayers) every month doesn’t mean that net recipients, those who receive more than they pay in taxes, will not be stigmatized by those who pay more than they receive, or feel the same shame that welfare recipients often report.

Finally, despite some impressive capabilities in some areas, AI ain’t all that. Cobots (collaborative robots), bots, and other types of AI-based software systems are developed as assistive technologies and thus are economic complements rather than substitutes. In other words, humans will use AI as yet another tool, much as they have used Acheulean hand axes and satellite-guided tractors to do more work in less time or with less energy.

A general purpose technology (GPT) causes rapid increases in productivity that spur significant and widespread impacts on society and the workplace. It may also generate numerous more specialized complementary innovations and technologies. Like lithic tools, the steam engine, or the Internet, AI is a GPT.

Current developments in AI point to several changes in the world of work. As occurred in response to previous GPTs, some jobs will become obsolete while others will transform. Though some change is certain, AI’s exact impact on the future of work remains unclear. Some researchers connect the adoption of AI and robots to reduced employment and wages, and so see a need for UBI. According to studies conducted by McKinsey, PricewaterhouseCoopers, and Skynet Today, AI will displace about one-third of existing jobs worldwide within a decade, with the United States (up to 40%) and Japan (50%) among the hardest hit.

Others, however, predict the contrary. According to the OECD AI Policy Observatory and a Beyond Limits study, AI will create more jobs than it destroys. Companies pioneering the development and scaling of AI have thus far not destroyed jobs on net. Moreover, evidence from companies that not only implement but also scale AI suggests that reskilling is more prevalent than layoffs, which are not foreseen in the short or medium term.

Job change rather than job loss will occur because a job can be viewed as a bundle of tasks, some of which offer better applications for technology than others. According to David Autor, who specializes in research on work automation, both managers and researchers should think in terms of task replacement rather than unemployment. Some high-skilled professionals, such as engineers, radiologists, and lawyers, are at risk because most of the tasks they perform can be done by AI. Such highly educated professionals, however, may also be capable of applying AI in ways that fruitfully complement their work.
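To make the task-bundle framing concrete, consider a toy sketch in Python. The job, task names, and automation scores below are invented for illustration; they are not estimates from Autor’s research or any survey:

```python
# Toy illustration of the "job as a bundle of tasks" framing.
# Task names and automation scores are hypothetical, chosen only
# to show how the accounting works.

def automatable_share(tasks: dict[str, float], threshold: float = 0.7) -> float:
    """Return the share of a job's tasks whose automation score meets the threshold."""
    at_risk = [name for name, score in tasks.items() if score >= threshold]
    return len(at_risk) / len(tasks)

# A hypothetical radiologist's task bundle (scores in [0, 1]).
radiologist = {
    "read routine scans": 0.9,         # pattern recognition suits AI well
    "flag anomalies for review": 0.8,
    "consult with patients": 0.2,      # judgment and empathy do not
    "design treatment plans": 0.3,
}

share = automatable_share(radiologist)
print(f"Tasks at high automation risk: {share:.0%}")  # -> 50%
```

On this view, even a high share of automatable tasks signals that the job will be reorganized around the remaining human tasks, not that the job itself will vanish.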

In short, AI cannot credibly justify a radical policy like UBI at this point. Misunderstanding AI adds to a natural fear of the unknown, though we know that technological change always proves beneficial to the economy and that society will have ample resources to aid anyone who might be displaced by AI in the future. People should concentrate on how AI can automate mundane tasks and stop the technological fearmongering. Your job, like the world, will still be here tomorrow.


