legitsalt pending artist website :3

post #9: AI: medium specificity and the weight of history (and of tracing)

I wrote a whole thing last night about how my body is a writhing mass of eels right now, but I decided to take it down since it was maybe more dramatic than it needed to be and stuff. I'll probably try to reconstitute it into a poem or two. But that's beside the point for the here and now.

     I want to write about AI. My want in this case wells up both from within myself and from the social contexts I am weathered within (i.e., this cursed online and offline media ecology that imagines AI as an omnibus villain of visual media onto which any sin can be readily transposed).

     This post is motivated by a video essay I watched earlier today, "The Trans Debate: Political Science vs. Autotheory," by Marceline🦋. I wasn't familiar with Marceline beforehand (I found the video on my front page), though I was somewhat familiar with the topics she was talking about. Overall, I thought the video was incredibly well done in many ways. The part that's relevant here is that Marceline argues that autotheory (I love the way she says it; it's very likely that I will unconsciously mimic her in this regard) may be valuable in part because there are dimensions of transness that it can make readable which are otherwise simply illegible to the traditional mechanisms of (political) scientism. Studies can be helpful, but they should not be held as the sole determinant of truth and objectivity, Marceline argues. Through autotheory, then, a trans woman can speak truth to the ways her specific circumstances have intersected with power systems, gender, etc. Instead of waiting for the day when a study comes along to definitively fix (i.e., fasten) how it is that people "become trans" (so to speak), autotheoretical matrices allow people to take action and gain insights in the here and now which can have observable positive impact on the lives of trans people.

text to break up the flow; maybe to come back with an image in its stead later

     I want to hang onto this notion that different methodologies make different elements visible (such that seeing more elements requires taking up different methodologies) and try and wrangle it into the contours of the media studies concept of "medium specificity." In essence, medium specificity makes a parallel argument -- it says that each medium has things which it can do and things it cannot, i.e., painting is more suited to one kind of expression and written poetry to another. I've traditionally been of two minds about the concept. On the one hand, I think I agree that, yeah, novels and youtube shorts possess different capacities of meaning making; on the other, the historical maelstroms which have always been dog-eared onto medium specificity render the practical applications of the concept rather conservative. Medium specificity most frequently comes up in discussions within literature and film departments. At the beginning of the 20th century, when literary works were getting adapted to (silent) film, you have a number of authors from this time period (e.g., Virginia Woolf) writing complaints that the new medium is stepping on literature's toes -- why can't it do its own thing, why can't it go and bother theatre and leave the sacred literary canon alone, etc. etc. This debate died down after a time as film, for various reasons, was simply here to stay and likely had too much capital sway to lose steam on its way to progress. Well, come the end of the 20th century, and the discourse erupts again, but this time it's the film folk saying that this new guy -- video -- is harshing their vibe. Now you have waves of scholars writing about how digital media is fundamentally broken and will only ever be a failed simulacrum of film's truly transcendent and necessary potential. To source their claims, scholars often turned to the semiotic concept of indexicality, saying that the filmic apparatus (the way films are shot, edited, and exhibited) has ineffable tethers to the elusive but powerful sign that is the index (there are several types of indices and they all get attributed to film; most generally, though, you can think of an index as a pointing finger, something which indicates the existence of something else). Video fails to have an index, and thus video is the half-child limping along in succession, uttering only wilted murmurs of mimicry. I am being sensationalist here and presenting a straw man, but I don't think it's that far off from what scholars were saying. However, the argument about video is not only that it's a half-image (this is what some Japanese theoreticians call it), but also that its existence retroactively stains the veracity of film. Because video is reproducible and nonindexical, and because video images are not visually that dissimilar from filmic images, people will lose trust in filmic images, and thus the whole enterprise of the indexical filmic image (which at this point gets held as ransom against the weight of believing in something like history) becomes threatened in its entirety.

     I haven't been in the academy for 20 years or however long these discourses have been happening, but the arguments that I see scholars make in articles published around the turn of the 21st century and that I hear scholars make (i.e., in person) today are, for all intents and purposes, overlapping with one another. This constant worry that the advent of digital is but the mists of doom rolling in, filmic apocalypse right around the corner -- it seems like the worry persists but has yet to be meaningfully proven true (there are still film departments, and, except for exceptional cases, digital media is but an addendum in academic spaces).

text to break up the flow; maybe to come back with an image in its stead later

     While the arguments have remained relatively the same, the media they negotiate over have been in rapid flux -- this is most readily the case with digital. Most recently, the mainstream public perception of digital image technology has mutated in the wake of the proliferation of AI image technology. That is, I hear (both within the academy and from outside it) people heralding that AI image technology is the death of the digital image. Because of the existence of AI images (that AI images are known to be something that *can* exist), people go so far as to try and persecute digital artists they perceive as using AI image technology (whether they actually do or not). On the one hand, this would seem to affirm what film academicians have predicted for so long -- that advances in image technology threaten the sanctity of the enterprise (people lose faith).

     I don't really care about AI image technology -- it doesn't bother me in the ways I see people fearmonger about it -- but I've struggled to come up with a framework for arguing my position. (This is not really what I mean, but I will pretend for the moment that it is.) I think that Marceline's framing of the debate between autotheory and (political) science can serve as a useful overlay for letting me express what I think I'm trying to get across. The crucial first step here is accepting that multiple perspectives can exist at once. While Marceline contests some of the claims that scientism makes (in general and with the specific study she's talking about), she doesn't seem to indicate that she's against the enterprise entirely. In fact, she collaborates with a biologist (someone from a science field) to help explain a topic she doesn't have as much prior knowledge of (gene mutation stuff, etc.). Extending from this, I want to emphasize that I am not against any media technology or any type of image. While I contest film scholars' ontology of film, I don't wish to melt the flesh of the thing itself. It can vibe off in the corner, I don't care. Instead, I think it could be helpful to see the advent of new imaging technology, as with the advent of new theoretical methodology in the case of autotheory, as a way in which the capacities of the image, in general, expand. If autotheory indeed allows theoreticians to make visible experiences otherwise invisibilized by the dominant methodologies, I am suggesting that new image technology such as video or AI can make obtainable meanings otherwise unobtainable by the dominant image technologies. One of the major points wielded against AI art is that it looks yucky and bad and dumb -- that it fails to mimic the meanings available in an evidently authored work of art. While personal taste exists and should be taken into consideration for some kind of calculus, I feel like this argument is not sufficiently substantive. This argument presupposes that visibly authored meanings are the only valid meanings. One counterexample (which does not operate in the same way as AI image meaning making, but which is a counterexample nonetheless) is the images already present in forests. While nature photography and landscape paintings represent ways in which authors can superimpose themselves onto a forest space, I argue that forest spaces possess meaningful images even before they are captured or interfaced with by external authors (assuming the viewer doesn't count as an author) or image technologies. I personally have had meaningful experiences while in the woods. While this argument can be contested by talking about the authorial role people may have in shaping forest images, such as via landscaping or trailblazing, I feel that these person-originated interventions can never be wholly occluding (the settler is never complete in their dominance; there will always be a remainder). In other words, while I'm not hopping on here to say that AI images are peak, I think it's probably naive to say that they are failed images (less even than the half-image of the nascent digital).

text to break up the flow; maybe to come back with an image in its stead later

     Some of the ways AI images could potentially open up avenues of meaning making involve contesting such enterprises as history, copyright, and the original/source. AI images come from somewhere, no doubt about it. We can ask questions about what databases certain image technologies are drawing from and whether these images may be limited in the scope of what they can image. I'm not trying to say that AI image technology is perfect or without harm (though I think its harm is often overstated). But even though they come from somewhere, they carry hardly any identifiable trace (another name for an index) of where *precisely* they came from. According to one of my teachers last semester (who was a lovely and overall great person), this total lack of trace/index threatens to obliterate the meaning of history. If people are raised in a world where AI images exist in the mainstream, what use do photographs hold anymore? As an example for why this matters, my teacher talked about how photographs were used as evidence of the atrocities the Nazis committed during WWII, providing visible traces of the bloodstained evil that the Nazis had on their hands. Had photography not been the dominant image technology (for still images), then maybe those Nazis could not have been punished for their sins -- maybe Holocaust deniers would be functionally vindicated. Generally I'm wary of these strong hypotheticals which assume bad faith. At the same time, it's undoubtable that what my teacher is worried about is attached to a real anxiety. It wouldn't be helpful, for example, were I to reply, "no that wouldn't happen, lol." Instead I want to suggest that images don't offer meaning only in how they correlate to history. While it seems like photography is rooted in history (this is potentially arguable, since part of this claim stems from the belief that the photons from the initial event get inscribed exactly as they are in the photograph, something which just isn't true), AI images evidently are not (given that we define history in the indexical way that the media studies people want to talk about). The lack of a historical root is not intrinsically a bad thing, I would say. At a certain level (maybe this is a straw man), film's (speaking here doubly about photography) root in history is also a root in institutions of authorship and copyright. Since essentially the beginning of film history, people have used film as an arena for flexing ownership (and thus capital) to the express demise or limitation of their peers. See, for example, Thomas Edison (a famously scummy guy), who settled himself into film history not only through his patenting of film technology (cameras and film stock, etc.) but also through his monopolization of films -- he flooded the market with Edison productions (some original and some foreign imports whose title credits he clipped off and replaced with his own name). Edison used film to expressly enact tactics of predatory capitalism. While histories of independent film production exist, the largest players in the medium have always been studios -- and what is Hollywood if not a synecdoche for the evils of present-day capitalism?

text to break up the flow; maybe to come back with an image in its stead later

     Therefore, insofar as AI images are temporally volatile and are sourced from large databases of potentially copyrighted works, the image technology necessarily contests the singularity of copyright and authorship as the sole channels of legitimating (aka giving meaning to) an image. AI images are not illegitimate because they "steal art"; they are legitimate in spite of this fact. Arguing otherwise duplicates a capitalist logic which sanctifies history and entrepreneurial genius. Yeah. That's most of what I had to say. But for the final nail in the coffin, I'll try to connect this with digital image technology as well. While digital images have not usually been paraded with the same certainty of authorship and historicity as filmic or photographic images, there has nevertheless developed a culture of declaring authorship over digital images. It's within these cultural contexts that "tracing" is seen as an explicit faux pas and sign of a morally failed artist. There are more than a handful of documented cases where, after it comes out that some artist X was tracing the art of artist Y, public outrage at artist X for their discovered sins grows to the point that artist X deletes their social media and retreats from the public eye. Of course, then, AI images are threatening in such a cultural climate. While AI images are largely non-indexical, depending on the databases they draw from, a knowing eye can spot what artists or artworks an AI image technology was trained on / is drawing from -- that is, a knowing eye can say that AI images are tracing. Unfortunately for these art sleuths, tracing is not a primordially immoral act. Images can be produced via tracing. The fact of tracing does not negate an image's image-ness. This is not to say that tracing is fundamentally good, but that traced art and non-traced art offer different avenues of meaning making. In other words, AI images move all the way through the petty squabble of tracing litigation to offer a type of image which is theoretically ONLY produced via tracing-approximate methodologies. This new image then offers a different kind of meaning making from digital art produced without tracing (which values singular authors of distinct style and figure).

     To close out, accepting all of this requires a shift not only in the way artists and scholars conceive of images, but also in the way people conceive of images as meaningful. Pornographic images mean something different from anatomical diagrams or CCTV footage. Maybe some image technologies produce certain meanings more easily or make available the production of new meanings. If companies (like Hollywood, for example) using AI to replace workers is evil, it is not because AI is evil in and of itself, but because companies are evil and anti-worker and, probably it's fair to say, anti-person. The Disney+ Marvel whatever show that used AI images in its opening sequence (Secret Invasion?) or the Coca-Cola advertisement that used only AI images -- these are not morally wrong (if they can be said to be so) because they are AI, but because they are instances of corporations using technologies to displace their workers. That the technology is AI is incidental. While the extent to which corporations have invested in AI is indicative of something, I don't believe it inherently indicts the medium in and of itself. Each medium is specific. Let the protuberances grow into their own spires -- things don't need to be stuffed into the same crystalline towers all the time.

::about essays::

i call it essays, but this will basically be a blog (or something approximating one)
plan is to post text posts about various things i've been thinking about.
-=-=-=-=-
click here to go back to the main essays page
