The semioscape is a third place of meaning: home to meaning making at the interstices between humans and machines. It’s a dynamic, fluid place, always in a state of flux. Utterances from the machine come to life here, gaining meaning in the mind of the human operator. The semioscape of a specific session is generated through the interaction of the user with the language model. It’s a useful conceptual device for understanding meaning being made in a session with a non-human speech actor: how are we to make sense of ‘talking to machines’ when the machines may or may not understand the words they’re uttering, yet those words can clearly still be meaningfully interpreted by the human reader?

Do we need semiotics to understand talking machines? How can we accurately describe the action space of speech utterances in the chatbot’s terminal? What happens to meaning when it’s created in this setting and ‘leaks’ into natural, social conversation? One toy example is the creation, or co-creation, of neologisms; another is the co-creation of poetry. In each case, new co-created meaning comes into being within the human-AI session and can later be transported out of it into common speech. That is, we could take neologisms, including the term semioscape itself, and bring them into human-human conversation, as we’re doing in this essay. There is some question, then, of contagion from the human-AI setting to the human-human setting. The same questions can be asked, and are being asked, of AI-generated images, which are co-created between a human-written prompt and the generative AI tool. This co-constructed media then permeates feeds, discourse, and other traditional venues where humans communicate with one another. We therefore end up with hybrid blends of human and generative outputs, where the meanings being made are themselves hybrids of human latent space and neural latent space.

But before we go any further, we can ask: do humans have a latent space? What is the latent space of human-generated meaning upon which human utterances and creations draw? One answer might be that language itself is the holder of latent meaning, and that when we invoke language, as humans, we are also drawing on networks of associations that tap into a deeper collective unconscious or repository of shared meaning. Indeed, when I am speaking to you, I’m relying on shared reference frames for meaning, whether it’s the common use of English, the familiarity of terms like ‘latent space,’ and so on. I might even go more granular with shared references, and point to specific citations that we might both be expected to be familiar with, or at the very least, that one can go to and index upon. One such reference is Derrida’s différance, whereby words and signs obtain their meaning not through what they signify but through their differences from other words and signs. This suggests a network of meaning forever deferred through an endless chain of signification. This chain of signification is one hypothetical ‘structure’ that could be considered a latent, fluid structure of meaning, in use by humans, for making sense of text. In truth I suspect the network of meaning in use by humans is fluid and relies not only on difference but on other operators that together bring about a networked, dynamical system of meaning making. The basic process on which this system relies is speech and text acts; this churn of dialogue, through which meaning making happens, forms a dynamic landscape of meaning, in constant flux, which might speculatively be called the sociosemioscape.

The sociosemioscape is highly generative itself, and produces a vast corpus of documents, be it books, articles, journal entries, online message board posts, comments, product reviews, and so on. In other words, the entirety of human-generated text is like a field of artifacts left behind by the sociosemioscape, itself the dynamical process by which humans are engaged in making meaning with one another. This process leaves behind a vast palimpsest of its activity, and this in turn is used as the basis for training generative AI models. The models learn to simulate the output of the sociosemioscape by producing replica text that could pass for having been created by the flux of human speech and text acts, but they get there by other means. An AI does not need a primary education, nor does it ‘grow up’ learning English, nor does it get into flame wars on Reddit to hone its argumentative skills. Instead, through a process of sequence prediction it develops its own internal model of language, which allows it to become a speech actor and create ‘meaningful’ text. I call this internal latent structure, which is a result of the sequence prediction task, the latentsemioscape, though it could just as well be called the latent model or latent space of the model.
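To make those ‘other means’ concrete, here is a minimal sketch of the sequence prediction objective from which the latentsemioscape emerges. The model and names are illustrative placeholders, not any particular training setup:

```python
# Minimal sketch of next-token (sequence) prediction, the training task from
# which the latentsemioscape emerges. `model` is a hypothetical autoregressive
# network returning logits of shape (batch, seq, vocab).
import torch
import torch.nn.functional as F

def next_token_loss(model: torch.nn.Module, token_ids: torch.Tensor) -> torch.Tensor:
    """Cross-entropy loss for predicting each token from its prefix."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                      # (batch, seq - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),    # flatten batch and positions
        targets.reshape(-1),
    )
```

Everything the model comes to ‘know’ about language is shaped by pressure from this single objective: predict the next token given what came before.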

Do large language models have a latent world model? This is a question researchers are highly invested in. We understand that LLMs can produce meaningful sequences of words, reason about complex subjects, even exhibit something like theory of mind, and engage meaningfully in dialogue with humans. And yet there is relatively little understanding of the process by which this occurs. A recent study has shown that LLMs may encode relations between words, and that a subset of these relations can be decoded by a single linear transformation on the subject representation. This points towards the existence of a possible relational, or associational, structure within the LLM that captures ‘factual, common sense, and linguistic relations.’ Further, prompting an LLM shows that it can output association graphs between words, revealing that it has some internal representation of associations between words and, perhaps, concepts.
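That linear relation finding can be sketched concretely: for some relation (say, ‘capital of’), the representation of the object is approximately an affine function of the model’s hidden representation of the subject, o ≈ W·s + b. Below is a hedged illustration of fitting such a map by least squares; the matrices S and O stand in for hypothetical hidden states extracted from a model, not real data:

```python
# Illustrative sketch of linear relation decoding: for one relation, fit an
# affine map so that O ≈ S @ W.T + b, where each row of S is a subject
# representation and each row of O is the paired object representation.
# S and O are hypothetical (n_pairs, d) arrays, not outputs of a real model.
import numpy as np

def fit_linear_relation(S: np.ndarray, O: np.ndarray):
    """Least-squares fit of O ≈ S @ W.T + b for a single relation."""
    S_aug = np.hstack([S, np.ones((S.shape[0], 1))])  # append a bias column
    coef, *_ = np.linalg.lstsq(S_aug, O, rcond=None)  # shape (d + 1, d)
    return coef[:-1].T, coef[-1]                      # W, b

def apply_relation(W: np.ndarray, b: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Predict the related object representation for a new subject vector."""
    return W @ s + b
```

If fits like this hold with low error across many subject-object pairs, that is evidence for the relational structure described above.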

While the science is still catching up to the particular kinds of internal representations used by LLMs, the behavior of the models is itself evidence of submerged processes that allow them to create meaningful text and, at times, even reason about complex subjects. The literature describing the capabilities of these models is vast, and together it reaffirms what can be observed first-hand with the models: LLMs are capable speech actors, able to read, reason, and co-create text, with limitations. Do the submerged processes which enable this mirror the submerged processes of the sociosemioscape? Are there basic operations, like understanding relations between words, that they share?

Addressing this question requires understanding both the sociosemioscape and the latentsemioscape, and the kinds of processes they use to create meaning. We can lean on the literature in AI interpretability to understand the latter, while the former might have us looking at semiotics, biosemiotics, empirical studies of meaning making, and other fields that study the processes by which meaning occurs, like communication, cultural studies, and perhaps even visual studies. To better study where these two structures intersect, the semioscape, we can look to human-AI interaction. Through all of these, we’ll be looking to better understand the mechanisms by which meaning making occurs in the various substrata of sociological and artificial neural processing. We’ll also seek to clarify what we mean by ‘meaning making’ and the ways in which language, and even visual encoding, facilitate this process in the human-AI domain. The goal is to seek out overarching principles or heuristics by which meaning making occurs. Perhaps a relational or associational articulation gives some purchase. Perhaps an understanding of latent encodings of meaning, such as those that can be perceived in vision models, can ground a broader articulation of the functioning of the sociosemioscape.

By looking at the confluence of human and AI meaning making, the co-creation of meaning through human-AI dialogue, we may uncover new forms of meaning making. I’ve speculatively called the symbiotic creation of meaning between human and AI semio-symbiogenesis, as the term captures the symbiotic relationship of shared meaning making that occurs between a human and an AI in dialogue with one another. As language models become more capable, and humans increasingly rely on them to scaffold their thinking and co-create text, it’s possible that this paradigm of shared meaning making may become influential in the broader project of creating new ideas or new articulations of existing ones. This is a view in which language models are not just used for information exchange and retrieval, but rather one where their synthetic capacities, drawing on vast and varied bodies of knowledge, are used to scaffold novel articulations of concepts or even to create new ones. One basic way in which this already occurs is through the creation of neologisms. Of course, neologisms are meaningless unless they can be integrated into the sociosemioscape, but this itself poses interesting questions about the relationship between human-AI generated knowledge and its proliferation into the wider social world of thought. We may even find that semio-symbiogenesis is at its most potent when generating novel scientific breakthroughs, e.g. in the form of molecular discovery, where human discovery of novel molecular structures is scaffolded by generative AI.

Better understanding the human-AI relationship in its dynamic process of meaning making can allow us to create new paradigms for thinking about the relationship itself. Rather than seeing AIs merely as ‘assistants,’ we may ask whether they can be symbiotic partners in the process of making sense of the world. We may ask whether they are merely mirrors of our own thoughts, or a kind of cognitive prosthetic that augments our existing skills. If the latter, how is such a prosthetic best used? What are the drawbacks to this kind of enhancement? How far can such an augmentation be taken? What kinds of skills are needed to use them most adeptly?

Where to go from here?

  • A richer articulation of the sociosemioscape by way of social semiotics
    • text is a trace of discourse, yet by way of AI it becomes discourse again
    • how this relates to training data
    • how it relates to AI generated text
    • can we construct social semiotics through a latent-space theory?
      • meaning and representation as indexing on a larger latent space of representation
      • latent space - a surface of potential representations, cohering to an organized logic
        • how are latent spaces described in AI? How do diffusion models work, and what are their inner representations like?
      • indexicality on latent space as a representation of “context” (see the toy sketch after this list)
  • A clearer articulation of “meaning” and “meaning making” in the context of this project
    • meaning - “is a relationship between two sorts of things: signs and the kinds of things they intend, express, or signify”
    • Read Understanding Computers and Cognition: A New Foundation for Design (Winograd & Flores)
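As a toy handle on the ‘indexicality on latent space’ item above, here is a small illustrative sketch in which a context vector indexes a neighborhood of nearby representations on a latent surface; the vectors and names are invented for illustration, not drawn from any real model:

```python
# Toy sketch of indexing on a latent space: a context vector picks out the
# region of nearest representations, standing in for how an utterance indexes
# a neighborhood of possible meanings. All data here is illustrative.
import numpy as np

def index_latent_space(context: np.ndarray, latent: np.ndarray, k: int = 5):
    """Return indices of the k latent vectors most similar to the context."""
    c = context / np.linalg.norm(context)
    L = latent / np.linalg.norm(latent, axis=1, keepdims=True)
    return np.argsort(-(L @ c))[:k]  # top-k by cosine similarity
```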