WEEKLY REFLECTION: WEEK 7

Summary and Reflection for Week 7 [AI and Identities]

Core Reading

Negative Prompting:

A technique that crafts prompts semantically opposed to the target text, guiding the model to sample from the regions of image space that are statistically farthest from that text within the training distribution.
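To make this definition concrete, here is a minimal numerical sketch, my own illustration rather than anything from the article: in many diffusion pipelines a negative prompt is implemented through classifier-free guidance, where each denoising step starts from the prediction conditioned on the negative prompt and steps toward the prediction conditioned on the positive one, so the sample drifts away from what the negative prompt describes. The vectors and the guidance scale below are invented for illustration only.

```python
import numpy as np

# Toy stand-ins for the model's noise predictions conditioned on the
# positive and negative prompts (illustrative values, not real model output).
pred_positive = np.array([0.9, 0.2, 0.1])   # conditioned on the target text
pred_negative = np.array([-0.8, 0.5, 0.0])  # conditioned on the negative prompt

guidance_scale = 7.5  # a commonly used strength in diffusion pipelines

# Classifier-free guidance with a negative prompt: begin at the
# negative-conditioned prediction and step toward the positive one,
# pushing the sample away from what the negative prompt describes.
guided = pred_negative + guidance_scale * (pred_positive - pred_negative)
print(guided)
```

The larger the guidance scale, the harder the sampler is pushed away from the negative prompt's region of the distribution.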



Reflection

My previous reflections on AI were confined to two questions: Will AI replace humans? And can AI imitate humans? The author of this article, however, poses a different set of inquiries: Who gets to determine what counts as an accurate outcome? And who decides which biases must be corrected and which may be retained? The article offers a striking example: the artist Swanson used negative prompting in a text-to-image generation model, aiming to create an image diametrically opposed to Marlon Brando, yet the result was, unexpectedly and consistently, a woman's face. This seemingly "eerie" AI-generated image is in fact the product of the artist and the model jointly exploring the latent space.

Moreover, the article's Auto-Tune example left me uneasy. We tend to assume that technology helps correct flaws, but it may in fact be subtly homogenizing our sensory thresholds, compressing diverse, imperfect forms of expression into a single kind of "desirable content" sanctioned by algorithms. Through this article I came to realize that contemporary machine learning and deep learning are no longer mere tools; they are quietly reshaping the way we experience the world. The internal computations of AI conceal a statistical realm invisible to users, and AI-generated images such as Loab exemplify these hidden statistical relationships.

The author also reminds us that AI systems inherit the historical biases embedded in statistics, and thus their outputs are not always neutral. AI can render culture monotonously uniform (much like how Auto-Tune has made voices increasingly indistinguishable), but it can also be transformed into an entirely new creative collaborator in the hands of artists. Ultimately, the author urges us not to view AI merely as a black box or a tool, but as an entity that is co-evolving new forms of experience alongside society, technology, and humanity.

During the workshop, I experimented with using negative prompting to guide an AI in generating a story. To me, negative prompting means deliberately instructing a generative AI: "Move away from a certain set of characteristics and toward what is statistically farthest from them." Reflecting on my usual interactions with generative AI, I mostly rely on positive prompting—for instance, asking it to produce more academic, polite, or well-structured content. In doing so, I am essentially following the dominant patterns the model has already learned. Negative prompting, however, reminds me that machine learning does not truly "understand" meaning; instead, it searches for the data points in a high-dimensional space that are either most similar or most dissimilar to the given prompts. The "creativity" of AI is, in fact, dictated by these statistical relationships.
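The "most similar or most dissimilar" search described above can be sketched as a toy cosine-similarity lookup. This is a simplification of my own: real models compare learned latent embeddings in very high-dimensional spaces, and the names and vectors below are invented for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made "embeddings" standing in for points in a model's latent space.
library = {
    "portrait_a": np.array([0.9, 0.1, 0.0]),
    "portrait_b": np.array([-0.9, -0.1, 0.1]),
    "landscape":  np.array([0.1, 0.9, 0.2]),
}
query = np.array([1.0, 0.0, 0.0])  # embedding of the prompt

scores = {name: cosine(query, vec) for name, vec in library.items()}
positive_pick = max(scores, key=scores.get)  # positive prompting: most similar
negative_pick = min(scores, key=scores.get)  # negative prompting: most dissimilar
print(positive_pick, negative_pick)
```

Positive prompting retrieves the point closest to the prompt's embedding, while negative prompting steers toward the point farthest from it, which is the statistical relationship, not any "understanding," that governs both.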

When I asked the AI follow-up questions about the initial story outline it generated—such as "Why does every falling-out have to be painful?", "Why is conflict an inevitable part of estrangement?" and "Is sorrow the only possible emotion after a broken relationship?"—I was, in effect, renegotiating the boundaries of narrative possibility together with the model. This process also revealed which narratives in the training data are treated as default templates. Understanding negative prompting in this way has made me more aware that I am interacting with a prediction machine. I can leverage its predictive capabilities to complete tasks, but I can also intentionally challenge it, compelling it to temporarily deviate from its tendency toward homogenized outputs. This process is, in essence, a means of co-creating heterogeneous experiences with AI.