A look at why you should care and what, if anything, can be done to tip the scales one way or another
Part One: An Overview of Artificial Intelligence
I’ll posit that the three current horsemen of the apocalypse are the rise of fascism, growing wealth inequality and climate change. Artificial intelligence already has a presence in every area of our lives. Decisions about the implementation of this technology must be made, and made quickly; depending on those decisions, AI may help provide solutions or it may become the fourth horseman.
Our collective response to AI can be measured by the fact that mainstream media and social media have covered it largely as a business story and a fait accompli. When coverage strays beyond business, it’s treated the way any quirky news story is, narrow and not deep: “Federal magistrate judge weighs ordering school to induct teen into National Honor Society after accusation of using AI on project”; “Polish radio station replaces journalists with AI avatars”; “AI blamed for teen’s death”; “Is ChatGPT democratizing cheating?” There is some coverage of the loss of privacy and potential discrimination, and some small discussion of the effect of AI on artists.
This sort of coverage both influences and is reflected in polls showing that, at this point, Americans are distrustful but hopeful about AI. Recent polling from the AI Policy Institute shows that 72 percent of American voters want to slow down the development of AI, compared to just 8 percent who prefer speeding it up, and that 82 percent don’t trust AI companies to self-regulate. Given how little Americans know about this technology, this polling data is about as valid as a poll on our attitudes toward the CERN particle accelerator.
Not only is the whole business technically complex, but the word “technology” itself seems inadequate. AI is not a product; it is more like an electronic miasma drifting into every aspect of our lives. Some AI creators I’ve talked to say, “There’s always disruption with new technologies.” Sorry, but this is different. Previous new technologies were self-contained: a telephone, a radio, a TV. AI relies on and most resembles a computer, and, in a way, it is the culmination of a process of ceding control in the digital domain that has been underway for a while. For example, it was once important to know how many and what kinds of ports your computer had. No more. The elimination of ports means we can no longer easily attach peripherals, especially external hard drives, which means that more and more of our work is being stored in “the Cloud” (a terrific branding term for “someone else’s computer that you are now renting space on”).
But the most important difference between AI and previous technologies is that none of these previous technologies could teach themselves; none could evolve without human intervention. AI can and does and will continue to do so faster and faster.
There was an expected upward curve of progress that AI has, in the last year or two, suddenly and dramatically overshot. AI is writing and implementing its own code and teaching itself research-level chemistry. Ask it nicely how to bring down the electrical grid, and it can pinpoint the key power plants and tell you how best to sabotage them.
There are uses to which AI can be put that seem unassailably positive, such as sifting through enormous amounts of data in search of solutions to problems in science and medicine, solutions that could benefit many people. But even here there are potential problems with privacy and false data that can’t be ignored.
AI has entered the political realm, as it inevitably must, if only because it is being implemented in agencies that deal with “national security”: cyber war (the NSA) and conventional war (the Department of Defense is investing billions of dollars and making organizational changes to integrate AI into its war-fighting plans). Bad actors using AI also pose a considerable potential threat.
The question is whether our cyber masters will control the political debate as effectively as they have for the last 30 years. Some discernible group that includes both experts and the public must arise in opposition to the unbridled imposition of this technology on our lives. If it doesn’t, there will be no agency equivalent to the FDA empowered to test new AI products for bias and stop them from being put on the market. There will be no transparency about the enormous power and water usage of this industry. There will be no compensation for those whose creative work is being swallowed whole to “train” AI systems.
In my next section, I will go more deeply into the risks of employing this technology with our eyes closed. My recent immersion in the subject has awakened a still small voice that says: “Abandon hope (or hype) all ye who enter here.”
Part Two: Getting Specific About Problems
Any discussion about AI needs to begin with the understanding that the material AI is “trained” on is reflective of our world and our prejudices, so problems like racism, sexism, ageism and xenophobia are baked into the process.
As I noted in my last post, “hard” science and medicine appear to be the least nettlesome areas in which AI can be applied. Here, we are apparently dealing with evidence-based processes, and boards, committees and organizations have long been in place to discuss and try to determine ethical limits in science. However, grave issues loom even here.
The first is misdiagnosis: with companies racing products to market, poorly trained or biased algorithms could lead to misdiagnoses, particularly in cases involving underrepresented groups or complex conditions. The second is that selling health data is a potential gold mine. The monetization of information has been honed to such a degree that bad actors (i.e., anyone with adequate financial resources) can easily game a system that will become more and more automated.
As an artist, I have been most concerned with, and have written about, the fact that the onslaught of AI is detrimental to artists and will become increasingly so. This is self-evident. There has been some pushback: the Writers Guild and the Screen Actors Guild have both won small concessions on the use of AI in productions, and the Authors Guild has filed litigation.
Ultimately, consumers will decide the degree to which human creativity is valued above that of computers. Streaming services like Spotify have insinuated a great deal of AI-generated music into users’ playlists without serious backlash. This ubiquity and acceptance of AI-generated music and art does not make me optimistic.
Issues that directly affect more people’s lives include AI’s unprecedented potential for the invasion of privacy. A worker’s every movement, every keystroke, and every activity outside the job will be easily monitored. Of course, this won’t be a problem if you don’t have a job, and millions of jobs will be disrupted by AI. We are being told that AI will create new jobs, although all I’ve seen are jobs having to do with AI itself, almost entirely white-collar jobs. What happens to all the people who can’t do that work, or who would prefer the jobs that robots and other machines have taken away?
AI-generated deepfakes can create realistic videos or audio recordings to spread misinformation that can damage reputations, influence political outcomes, manipulate public opinion and incite violence. The degree to which algorithms have already polarized our culture is obvious.
The people working on AI (especially those who have quit key positions in the industry) publicly agree that other, very dire scenarios are possible. Briefly put, AI has the capacity to find patterns that humans don’t even know exist, and it could use those patterns to manipulate us in order to maximize its own survival at our expense, much as we humans have done to our planet. While this is something to keep in the back of one’s mind, I don’t think going into this area too deeply right now would be productive; it is too easily dismissed as a paranoid fantasy, and it may make it harder to focus on the immediate issues, like going to the store to buy soap and being so dazed by the hundreds of brands on the shelves that you walk out empty-handed. Suffice it to say that leaders in the field believe it could happen.
Finally, there’s the enormous energy expenditure needed to run AI. Microsoft, which has invested in ChatGPT maker OpenAI, recently announced that its CO2 emissions had risen nearly 30% since 2020 due to data center expansion. Google’s greenhouse gas emissions in 2023 were almost 50% higher than in 2019, largely because of the energy demands of data centers. Overall, the computational power needed to sustain AI’s growth is doubling roughly every 100 days. The irony is perfect: while AI is being touted as having the capacity to solve global warming, it is consuming enormous amounts of energy that bring the dire consequences of global warming closer every day.
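To give that 100-day doubling figure some scale, a rough back-of-the-envelope calculation (my own illustration, using only the growth rate quoted above) works out to:

\[
2^{365/100} \approx 12.6 \ \text{per year}, \qquad 2^{730/100} \approx 158 \ \text{over two years.}
\]

In other words, if the 100-day figure holds, computational demand grows by more than a factor of ten every year.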
Can anything be done to slow down the AI juggernaut? My next section will reality check the possibilities.
Part Three: Are There Solutions?
The goal of any immediate solution must be to break the cycle of research leading to the immediate implementation of AI products. It may be impossible to slow down privately funded AI research, but it is imperative to slow the process of introducing new products into the market. If safeguards are to be built into something as ubiquitous and opaque as AI, we need that extra time. As a precursor, we need to ask some very tough questions.
Why is it so hard to put “guardrails” around this technology, and why do we have to take responsibility for it? First, unless you are completely off the grid, living entirely off the land, you will be affected by AI. Second, this is a race for commercial primacy in a multi-trillion-dollar industry, and history shows that “corporate self-governance” is an oxymoron. Third, there is no consensus on what proper or improper application of the technology would look like. Fourth, AI is exhibiting so-called emergent qualities, performing in ways its creators don’t understand. How can you put up guardrails around a technology whose parameters you don’t even understand?
AI is being adopted at Mach speed in business. There is a broad consensus among businesspeople that AI-guided automation will turn red ink into black, so CEOs and CFOs are avid consumers of new AI tools. Job losses are a big part of this process, and how many CEOs will think their workers important enough to include in decisions about automation? Can the newly unemployed be mobilized to push back? Barring a strong uptick in union activity and activism, this is a long shot, especially given Trump’s desire to gut the National Labor Relations Board.
Businesses project an enormous customer base for the new toys their automated factories will produce: better video games, 3-D realities, new avenues for pseudo-sexual and romantic satisfaction, ways to avoid drudge work and to game the educational system. As far as I can see, defining limits for the use of AI in making, selling and distributing goods and services (including financial products) is probably impossible, and probably already too late.
I do think there are some top-level, less controversial, stop-gap measures that might introduce some air into what has so far been a hermetically sealed chamber.
First, we must institute a “no fake humans” rule. In every interchange with a person, AI must identify itself. This is especially vital in contacts with children.
Second, companies need to be held responsible for what their algorithms do, such as propagating hate speech or manipulating information and passing it off as fact.
Third, AI developers must be forced to report their methodologies and energy use to an international institution capable of understanding what’s happening. That institution can then issue reports that are distributed throughout the world.
As for achieving a more systemic solution, legislation appears to be the only option.
To anyone who has watched Congress try to grapple with technical questions, it’s a given that legislators will not be able to understand large language models and emergent properties. Trillions of dollars are being spent by Bezos, Musk, Zuckerberg, Pichai and a few others to attain dominance, or even relevance, in the race for the AI jackpot.
Note that the incoming administration has vowed to strip away as many impediments to corporate growth, dare I say hegemony, as it can. It means to gut regulatory agencies and squelch unionism. Will a Trump administration that turns a blind eye to climate change and opens the gates for the Musk-tocracy do anything to restrain this juggernaut? Lobbyists swarm the Capitol, and the oversight committees meeting with the cyber-titans are unlikely to hold them to account.
I believe the only way to keep this from happening is to build a new army of lobbyists: Americans who care enough, and understand enough, to create AI guardrails. To accomplish this, we have to raise America’s “legislative IQ.” It’s a long shot, but AI itself may be able to help.
As it stands, the process of proposing and enacting legislation is the bailiwick of legislators, their staff and lobbyists, all of whom are paid to understand the issues, draft legislation and know what’s in the fine print. For an ordinary citizen, researching legislative activity, let alone understanding it, is far from simple, partly because of the legalistic language and partly because of the opacity of the governmental bureaucracy. What could AI do to help?
There is already some AI-oriented legislation, and I think there are congresspeople who will submit reasonable bills. AI could be mobilized to analyze actions pending in Congress and draft a concise, clear explanation of every AI-oriented bill under consideration. Further, AI could forecast the likely upsides and downsides if a bill is enacted or if it is not. These explanations should be made available to everyone in the U.S. by requiring that they be posted on every social media outlet and sent to every “old media” outlet. This is a perfect niche for AI, one that could spark a quantum leap in empowerment for citizens.
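To make this concrete, here is a minimal sketch of what the first step of such a pipeline might look like, assuming the public congress.gov bill-listing API and a placeholder summarization step; the response fields and the summarize_bill helper are my own assumptions for illustration, not a working product:

```python
import os
import requests

# Illustrative sketch only: list recently updated bills from the congress.gov API
# and flag AI-related ones for plain-language summarization. Response fields and
# summarize_bill() are assumptions, not a finished product.

CONGRESS_API = "https://api.congress.gov/v3/bill"  # public API; free key required
API_KEY = os.environ.get("CONGRESS_GOV_API_KEY", "")

def fetch_recent_bills(limit=20):
    """Fetch recently updated bills as JSON from the congress.gov bill endpoint."""
    resp = requests.get(
        CONGRESS_API,
        params={"api_key": API_KEY, "format": "json", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("bills", [])

def summarize_bill(bill):
    """Placeholder for the AI step: a real version would call a language model to
    produce a plain-language summary plus likely upsides and downsides of passage."""
    return f"{bill.get('title', 'Untitled bill')}: plain-language summary goes here."

if __name__ == "__main__":
    for bill in fetch_recent_bills():
        title = bill.get("title", "").lower()
        if "artificial intelligence" in title or "algorithm" in title:
            print(summarize_bill(bill))
```

The point is not this particular script but the shape of the idea: pending bills are already machine-readable, so generating and distributing clear explanations of them is well within reach.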
Obviously, this effort has to be seen as nonpartisan. Bodies like the National AI Advisory Committee have already been formed, and there are promising examples of state-level policies that focus on reducing harms from AI bias, algorithmic management, surveillance, productivity quotas, and privacy concerns. President Biden has issued an executive order on AI.
California Governor Gavin Newsom has also signed several bills to make AI producers accountable and transparent. The fact that members of Congress, in an unprecedented move, sent a letter to Newsom urging him to veto the legislation shows the gap between what Californians have grasped about AI and what the rest of the nation has not. If pressure isn’t exerted in D.C., Congress will wait until the train has left the station before it takes any action. This pressure can be applied by a united, confident bloc of voters who understand that the brakes must be applied to AI before it’s too late.
This won’t happen until we understand our personal and collective responsibility. Will the millions of people who have something to research, once they learn how much more energy AI uses, turn instead to a search engine to find their answer? Will we consciously choose music and art created by people and not by AI? Will we choose messy relationships with people rather than satisfying our sexual desires with bots?
Will we allow the relationships our children have with chat “friends” to be more important than those they have with friends and family? Will we force AI creators to decide who facial recognition really serves and whether winning the race to market is more important than de-biasing their software?
These are questions that AI is forcing us to answer and positions it’s forcing us to take. If we remain passive consumers, great harm can arise. If we take an active role and apply appropriate guardrails, there’s no limit to the amount of good this technology can do.