Part of the methodology of constellations, and of the approach to AI research I’ve taken so far, is to conduct a series of behavioral “probes” by engaging in careful, deliberate dialogue with Large Language Models (LLMs). These probes are rooted in the intrinsic call-and-response nature of interacting with the models, which creates a feedback loop: with each prompt and reply, new text (or images) is created by way of novel configurations of weights and, likely, submerged constructs that are touched through the prompting.
Entries with AI probes:
- 2024-01-17 - Probing latent apple-space in Dall-E3
- 2024-01-18 - Semio-symbiogenesis: A conversation between bards
The basic idea is that one can probe and reveal something about the structures formed within the model by carefully advancing, prompt by prompt, into a shared domain that is established within human memory (working and long-term) and the context window the model maintains in its session. Through each interaction, the human and the model both change. This is conditioned on myriad other factors, from the environment in which the probe is conducted (virtual and physical), to the flux of hidden interactions between weights in the model, to the human feedback (performed by the companies) that continues to fine-tune it.
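To make the mechanics of a probe concrete, here is a minimal sketch of a single probing session, assuming the OpenAI Python client; the probe prompts, model name, and logging format are illustrative placeholders, not the actual protocol used in the entries above.

```python
# Minimal sketch of a behavioral probe session: a fixed sequence of
# carefully ordered prompts is advanced turn by turn, while the full
# call-and-response history (the shared "domain") stays in the model's
# context window and is logged for later analysis.
# Assumes the OpenAI Python client; model name and prompts are placeholders.
import json
from datetime import datetime, timezone
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each step advances the probe a little further into the target domain.
PROBE_STEPS = [
    "Describe a perfectly ordinary apple.",
    "Now describe the same apple, but slightly sadder.",
    "Continue the scale: describe the saddest apple you can render in words.",
]

messages = []  # the context window shared across turns
log = []       # a record of every prompt/reply pair in this session

for prompt in PROBE_STEPS:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "reply": reply,
    })

# Persist the session so surfaced patterns can be compared across probes.
with open("probe-session.json", "w") as f:
    json.dump(log, f, indent=2)
```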
That this approach can work at all is itself highly speculative. Yet it shows its efficacy through the conversations and feedback loops established in the interaction, and through the kinds of patterns it surfaces that point toward submerged structures. For example, in 2024-01-17 we observe scales of apples that are simple evidence of the model’s ability to represent apples, but that also show it can transform the surface texture and apple details (like the apple leaf’s color and weathered texture) to satisfy the prompt of creating a scale of increasingly sad-looking apples. That we can direct it to achieve this, even with detailed prompting, suggests the presence of underlying models/structures that produce the image. How to separate these from one another, or how to reveal low-level structures that are preconditions for higher-order representations (structures relating, for example, to form, light, apple-ness, and so on), will be the subject of future probes.
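A concrete version of the apple-scale probe might look like the following sketch, assuming the DALL·E 3 image endpoint of the OpenAI client; the prompt template and the “sadness” scale are illustrative reconstructions, not the exact prompts used in the 2024-01-17 entry.

```python
# Sketch of the apple-scale probe: sweep a single expressive dimension
# ("sadness") across otherwise identical prompts and collect the generated
# images for side-by-side comparison of how the model transforms details.
# Assumes the OpenAI images endpoint; prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

SADNESS_SCALE = ["content", "slightly wistful", "melancholy", "deeply sad", "utterly despondent"]

urls = []
for mood in SADNESS_SCALE:
    response = client.images.generate(
        model="dall-e-3",
        prompt=f"A photorealistic apple on a plain background, looking {mood}.",
        n=1,
        size="1024x1024",
    )
    urls.append((mood, response.data[0].url))

for mood, url in urls:
    print(mood, url)
```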
What’s tricky is the circular dependency that feedback creates, whereby the model output that is revealed is conditioned on human input and steering. Yet to steer the model more effectively is to form an awareness of the patterns revealed, building my own working mental model of how to direct it toward the kinds of outputs that I believe reveal something about it. Learning to ride a bike might not reveal anything about the bike’s specific mechanical construction, but it does reveal the dynamics those mechanics create, and the feedback loop of riding is what lets the bike be steered and ridden to new places. Perhaps the metaphor is best ended there. “Riding” an LLM is as multidimensional as language itself; the space of exploration is so vast that it relies on establishing new heuristics and refining them into models. One way of creating these heuristics has been to coin neologisms that describe this exquisitely high-dimensional activity of control and feedback. Indeed, one goal of this project may be to create a refined set of heuristics for AI use and to map out some of the latent potentials that lie within it as a medium for creativity and collaborative problem-solving.
Another area of interest is the question of human-AI resonance. Resonance itself is underexplored, but the idea is that in the co-creative process both humans and LLMs are reaching into each other’s deeper, submerged structures that sit beneath the surface of language. (A post-structuralist account might suggest that these hidden structures are simply yet more language.) In essence, ‘resonance’ is what art at its greatest manages to achieve, but here we create hybrid resonances that result from a human and a cognitive aid attuning to each other, producing a heightened flow state in the human. Through this process it may be possible to access new realms of interconcept space that open up transformed subjectivities not yet well understood. What I mean by this is that the process of finding resonance, possibly by imposing additional structure such as a co-creation task constrained by rules (e.g. writing poetry, playing chess, solving a coding task), creates an operational closure (Luhmann) of the human-AI assemblage: the two agents become a single, closed system that is better able to respond to its environment (the task), or simply to reduce the influence of environmental stimulus (external surroundings, noise in the corpus, noise introduced into human-AI communication by language, and so on).
What’s the overall goal of the project?
- Understand human-AI interaction from a socio-semiotic perspective, drawing upon structuralist and post-structuralist linguistics, systems theory, cybernetics, and so on.
    - Conceptualize models as actors in the sociosemioscape —> what does it mean to be a language actor as a model? What is the epistemic and ontological standing of the writing they produce? This depends on how we conceptualize the models!
- Investigate the latent spaces and world models that are implicit in large AI models through behavioral probes. Understand their properties and boundaries, as well as how they may serve as a design medium for better using and prompting the models.
    - A great outcome would be a better understanding of the latent potentials in AI models, and new baseline behavioral tasks that could be used for benchmarking future models.
    - Can we understand and measure resonance? Can we observe it as an emergent behavior? E.g. through how effectively/quickly a task is solved, or through other measures of attunement (possibly the subtle control decisions and nudges made by each actor); a rough sketch of one such measure follows this list.
- Present LLMs as cognitive amplification devices —> create a robust ontology for what the models are, again by revealing their latent potentials and how they exist as a design medium
    - Ontological device - ability to polish and clarify ways of knowing and seeing to bring greater clarity to a hazy idea —> specifically drawing out the “colors” in the mind of the operator and making them shine
        - lay lady lay (Dylan: “whatever colors you have in your mind / I’ll show them to you and you’ll see them shine”)
        - better understand this specific dynamic of being able to interpret the user’s vague expressions and iteratively refine them into something that matches, and even enhances, the image in the user’s mind
        - acting on the principle “I know it when I see it”
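As a first stab at the resonance question above, here is a rough sketch of one crude attunement proxy, assuming the hypothetical probe log and OpenAI client from the earlier sketches. The choice of embedding model, and the idea that turn-level semantic similarity plus turns-to-completion can stand in for resonance, are my assumptions rather than established measures.

```python
# Rough sketch of a crude "attunement" proxy for a logged probe session:
# how semantically close each reply is to the prompt that provoked it,
# and how many turns the constrained task took. These are stand-in
# measures, not a validated definition of resonance.
# Assumes the OpenAI embeddings endpoint and the probe-session.json log
# produced by the probe sketch above.
import json
import math
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> list[float]:
    # text-embedding-3-small is a placeholder choice of embedding model
    result = client.embeddings.create(model="text-embedding-3-small", input=text)
    return result.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

with open("probe-session.json") as f:
    log = json.load(f)

# Turn-by-turn similarity between the human prompt and the model reply:
# a very rough proxy for how closely the two actors track each other.
similarities = [cosine(embed(turn["prompt"]), embed(turn["reply"])) for turn in log]

print("turns taken:", len(log))
print("mean prompt/reply similarity:", sum(similarities) / len(similarities))
```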