Being right is for woke betas1!!1!1!!!!

Welcome to Hollering with the Armadillo, my periodic foray into doing a yell on the internet about pissy shit. In this edition: I expound upon my Twitter thread about artificial intelligence being mostly exciting to creepo dipshits, tie in some “cancel culture” critique, and use the phrase “key teleological flaw” like a big asshole. Strap in and subscribe!

Screenshot via the New York Times

Despite my best efforts to remain ignorant about it, ~ artificial intelligence ~ has been taking up ever more space in my personal internet discourse bubble for several months now. I have generally thought of AI, when I have thought of it at all, as the profoundly dull obsession of dudes who have not much else going on. There was a minute where my girlfriends and I played around with asking an AI image generator to present us with examples of our personal style languages to hilarious effect — my fashion words, “vintage,” “comfortable,” “quirky,” “bold,” and “fun,” turned up glorpy-faced models with 15 fingers and three legs dressed like kindergarteners at a choose-your-own-adventure dance recital — but more practical use cases never presented themselves, and I lost interest.

In recent weeks, I’ve had the vague sense that I should care more, or perhaps more urgently, about the possibilities/threats of AI, because the exciting/terrifying potential of the tech — per the discourse — is that it can/might/can’t/will/won’t/could/couldn’t replace journalists. But I was also late to giving a fuck about cryptocurrency, and probably still give too few fucks about it, or at least very few fucks about understanding how it works. (Or doesn’t — because it doesn’t work, as it’s mostly a scam, which is what I gather from Molly White’s very good blog.) Crypto and AI have been inextricably linked in my brain as things that boring, rich, and wannabe powerful (and sometimes actually powerful) men are trying to make happen. Who cares? Abortion is banned in a dozen U.S. states! I’m busy!

Maybe because I am a kinesthetic learner, I don’t take easy interest in new shit that doesn’t immediately present itself as relevant to my needs, and I couldn’t imagine what in the world a regular-ass person would use, say, ChatGPT for. I read plenty about what AI enthusiasts said AI could do, and read plenty from AI critics about its dangers and pitfalls, but it remained unclear to me what the hell the point of a pointless thing was either way.

But this week, Google introduced (under competitive duress) its AI chatbot, Bard, and everything shifted into focus. How? Well, first, Google explained what it intended Bard to be used for, which is broadly: addressing the kinds of questions that aren’t easily answered by one Wikipedia entry or straightforward Google search, because they require some degree of informational synthesis. Okay! Finally someone saying, “Here’s literally a service AI could provide and literally how it would work,” instead of some tech bro hollering about ~ potential ~ to raise venture capital. And second, Google’s Bard announcement featured a glaring error about the first photographed exoplanet as an example of the great informational synthesis service that Bard will provide.

As Parker Molloy put it: Whoops!

Immediately following an image captioned, “Use Bard to simplify complex topics, like explaining new discoveries from NASA’s James Webb Space Telescope to a 9-year-old,” Google’s announcement includes a paragraph about the importance of making sure “Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.” Whoops!

In fairness to Google, the company states over and over that this is still just in testing and that it is still making adjustments to improve the reliability of answers, but it’s still a bit awkward to include a wrong answer in the launch example.

The first thing that struck me about this corporate PR gaffe was less the error itself (real dipshit stuff, no argument here!) and more the use cases Google was promoting. If I wanted to find fun telescope facts for a kid, or — in Google’s other example — determine whether it would be easier to learn guitar or piano, I simply wouldn’t trust an AI entity to give me the information I needed to fulfill the ask. Maybe I just like critical thinking, or maybe I overestimate my own ability to know what’s best for myself, but I’d rather collect and synthesize my own information, even knowing that in using search engines (and social media, and what-have-you), the data informing any given self-synthesis is already technologically filtered. But isn’t that: life?

That’s when it hit me: an AI-synthesized answer to an even lightly complicated question is to me both useless and terrifying as a woman. I’m already capable of synthesizing my own information because I’m accustomed to sifting through biased data, identifying and excluding unreliable sources, and drawing my own conclusions. Not because I’m a journalist, or because I have a couple of degrees, or because I’m just curious by nature. It’s because I’m a woman. Biggity bam. Asked and answered. I’m always already aware that the information available to me in the world is more likely than not to have been gathered, created, or presented with men, and/or men’s perceived needs, in mind. (See: the entire field of medical research.) I don’t need a robot built by a bunch of tech bros to use its tech bro brain to sift through information deemed relevant by and for tech bros to report back to me on literally anything. I already have that! It’s called the internet, and the tech bros use it to call me fat.

When I was in grad school for anthropology, I had a number of colleagues in the social sciences who funded their studies by TA’ing mandatory writing classes for the University of Texas’s various STEM (science, technology, engineering, and math) programs. Their eternal complaint: these students simply don’t understand why they’re being asked to write at all, much less to write critical analyses of texts, and they’re highly resistant to it. At the same time, I TA’d introductory classes in my department, and had plenty of students who bemoaned writing assignments, but none who expressed real exasperation at the fundamental practice of having to write critical textual analyses.

I’m not making a blanket statement about STEM students or STEM professionals — many are excellent, while not universally enthusiastic, critical writers, and at least the ones I’m friendly with (and related to) are exceptional critical thinkers — but the AI zeitgeist seems very specifically fixated on creating critical analysis cheat codes. So I wonder if the race toward AI, and especially AI that generates writing (such as cover letters) and written analyses, isn’t a coming-of-age resistance moment for a STEM-educated cohort of (smart!) people who resent the obligation to master, or at least command, a skillset that seems to them irrelevant to their natural expertise and/or professional interests when, importantly, they might not have to anymore if they can find a way around it.

I don’t not get it. I am the daughter of two math-minded, writing-averse accountants whose careers were made vastly easier and no less lucrative by the advent of QuickBooks. I’m not aware of any serious push by accounting professionals to quash such tools, but perhaps this fight played out decades ago. Perhaps abacus enthusiasts were riled by the advent of the calculator! I mean, I know there are modern grownups complaining about the ~ kids these days ~ not being able to do long division. (So too, we olds bemoan the death of cursive handwriting.)

To the extent that making a hard thing easy or easier is a goal or benefit of technological advancement writ large, building AI for synthesizing information and related skillsets for the public interest — i.e., for doing journalism — is eminently reasonable and even exciting. Same as it’s reasonable to build a graphing calculator, or tax software or, hell, Photoshop. You do those things, and you’re going to have a hell of a lot more people casually doing advanced math, filing their own taxes, or going into the wedding photography business. I won’t and can’t argue with the essential AI premise on those terms.

But the AI zeitgeist isn’t happening in a socio-cultural-political vacuum wherein we’re all just trying to make it to the grave with a minimum of hassle. The AI zeitgeist is happening in the era of “cancel culture,” when people who in any other recent era would have enjoyed unquestioned political, cultural, social, and intellectual authority are now grappling with the experience — and in rare cases, the consequences — of being publicly criticized by those who have historically held less power. The most regressive and vocal critics of “cancel culture” are ideologically aligned with the starry-eyed ~ libertarian ~ tech-bro proponents of AI, making for a simultaneously hoary and futuristic project that automates lazy progress for the privileged at the expense of everyone else.



In this context, what advantage does AI synthesis offer that is more valuable than any given person’s ability to interpret their own research? Well, it offers an important shortcut for those who seek it: the ability to gather input without directly encountering, acknowledging, or appreciating information from marginalized sources. It offers critical analysis cheat codes: the divestment of information from origin, and the introduction of an “objective” interlocutor — an AI chatbot — who can make essential information more palatable to people who would otherwise be disinclined to believe, and indeed inclined to reject, that same information if they knew it came from women, or people of color, or LGBTQ folks, or poor folks, or disabled folks, or or or or or or.

I said it more pithily on Twitter, because I love to whine on a dying platform: the AI zeitgeist is rooted in white men being so worried that they are on the verge of having to trust the expertise of people who aren’t just like them that they would rather get their information from a wrong robot.

Indeed, some men (or men of a type? Hopefully not in general? IDK! This one’s on y’all dudes!) are so excited about the erasive and invisibilizing possibilities of AI that they would rather have their pornography feature AI-generated women than actual human people. Garbage Day’s Ryan Broderick offers a fascinating/revolting and nevertheless on-point take:

So I suppose the idea here is to replace human sex workers with A.I.-generated content because these men want to look at sexual content, but they hate the women that make it. They want the machines to free them from the self-loathing of being a “simp”.

And the fact that A.I. porn crushing the human porn industry is compared to journalism is also telling. Because I think for a lot of men, particularly in Gen Z, all of mass media, including pornography, has been successfully coded in their minds as woke and feminine by various far-right reactionaries. And these men are now beginning to see emerging A.I. tools as a way to hurt, and maybe finally vanquish, those who make the human-created media that they seem to hate so much. There are a lot of folks scoffing at the idea that these tools could never successfully do this, and, based on the teeth and fingers above, we definitely aren’t there yet, but we should maybe also start wondering what might happen if these tools get good enough to actually succeed.

And so when I say I am skeptical of AI as a woman, this is what I mean. Men wrenched information and computing technology and skillset expertise away from women, and especially women of color, when it became clear that doing so presented a profit motive. Indeed, many bodies of information and skillsets that were originally developed and held by women, and again especially women of color, have historically been appropriated by men for their own profit and authoritarian control – abortion, and in general the field of obstetrics and gynecology, is another very good example. Today, we know that AI tech both objectifies and erases women.

The people building AI technology know that they are building “unbiased” systems of bias, because they have been told and shown this, and asked to correct it, time and time again by the very same communities they wish to exclude from their systems. Now, they are rushing the public to use these systems – moreover, to become swiftly beholden to them – before corrections to those biases are made.

This rush is clear in the key teleological flaw in the discourse around AI, especially among its proponents. The idea that AI is best developed (or used, as folks like Francesco Marconi, who stands to materially benefit from this premise, claim) to do journalistic work is not evidence that AI is for journalism, and yet here we are:

This man straight out made this up! It is a lie. No journalists are seriously using AI tech “in secret.” Marconi was so enthusiastically dragged on Twitter for making this claim that he deleted the above tweet. But to adopt AI as it exists today, on the terms its profiteers wish us to, would be to act as if the giraffe’s neck is already and obviously that long because the trees are so high.

I assume that AI has potential; I am no naysaying Luddite when it comes to new technology. I dislike knee-jerk reactions to genuinely disruptive innovation. But when Big Tech Bros and Big Corporate Search tell me they’re going to ~ help ~ by making a robot do my critical analysis – my thinking – for me, I don’t consider that a convenient shortcut. I consider it a red fucking flag being used to distract me from something far more sinister going on behind the scenes.

I don’t know exactly what that sinister thing is, but I know that men telling me not to worry my pretty little head about something almost always means I should be scared as fuck. 

Because I am a woman.


