
Diagonal cut across the softmax logic

Discussing the impediments of the dominant approach to developing artificial intelligence.

Published on Mar 10, 2022

“The limits of my language mean the limits of my world. [...]

We cannot think what we cannot think; so what we cannot think we cannot say either. [...]

The subject does not belong to the world: rather, it is a limit of the world.”

Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Language-rooted impediments

The efforts put into developing artificial general intelligence seem to be continuously constrained by the limits of the functional language that we operate with daily. We expect a more sophisticated form of capacity to manifest within the very same boundaries that keep our own condition stagnant. Is this some form of ignorant, ego-driven arrogance, or rather a spark of hope that it is us - humans - who possess some special mental capacity for overcoming our contingent limitations by integrating the complexity of the system we have given birth to?

In the end, the precision in dealing with world-data displayed by deep learning models emerges from the ground of interpretation provided by our collective hive mind, which constitutes the first hidden layer of nodes connected to the input of neural nets during training. Neural networks don’t learn through direct exposure to the things-in-themselves. Instead, the only “truth” we feed them is human experience mediated by erroneous instruments of measure. One of the boundaries that neural nets do overcome, though, is the individuation of experience that is immanent to us. The ingress to collective wisdom remains wide open, especially in the modern age of digital reproduction; yet people, due to an intrinsic limitation, or perhaps a harness imposed from the outside, cannot integrate this wisdom into even an apparent unity. Neural networks seem to expose this missing trait: they can represent collective knowledge in a non-human, non-hierarchical, rhizomatic way whose very tissue, importantly, is easily accessible to us in its material (numerical) form. What’s more, the continuously increasing scale of the corpora used for training GPT-2, then GPT-3, then Megatron-Turing NLG, yielding ever more precise models, suggests that there is no limit to the amount of knowledge these architectures can contain.

Common sense would suggest that simple, elementary parts put together can build a more complex construction serving a functionality of a higher order of abstraction. In the same way that inanimate bolts, nuts, cylinders, and rods in conjunction lead to a steam engine, different pieces of knowledge, modules of human capacity, integrated the right way into an apparent unity, could lead to a higher form of intelligence. Isn’t the call to common sense, however, a call to the status quo established by a fallacious interpretation of the truth? What makes some believe that the new, produced within the circuitry of a neural net, can be tackled with an instrument tainted by the semiotic impediments of our rational selves?

The computational mechanism for interpreting the new, built into deep learning methodology, is reminiscent of the way colonizers historically approached unknown lands: harnessing their potential with prescriptive anticipation by means of violence, extracting all that could serve the stabilization of values, obliterating the misunderstood.

Fictional analysis of a language used by an extraterrestrial form of life - the Heptapods. Excerpted scenes from the movie Arrival by Denis Villeneuve. The Heptapods would communicate through geometrical forms; their sentences were circular (they could not convey the notion of before or after).

The internal abundance of latent space

The transformation of the network’s latent space through the softmax function back to the comfortable, lower level of abstraction is an act of pruning its inner abundance, forcing it back into the coding of reality that waits to be overcome. Franco Berardi in “Breathing” writes the following:

In the age of capitalism, the economy has taken the place of the universal grammar traversing the different levels of human activity: language, too, is defined and limited by its economic exchangeability. However, while social communication is a limited process, language is boundless: its potentiality is not limited by the limits of the signified. Poetry is the excess of language, the signifier disentangled from the limits of the signified.

The research on artificial intelligence, like any other area of technological interest, is passively subjected to the desires of the capitalist market. The effort of enterprises like OpenAI, the company behind the GPT-3 model, is not directed towards the production of the new, but rather towards harvesting the surplus value present in the overproduction of data in a multitude of different forms, and abstracting this surplus into ever more effective forms of commodity. The goal is not to push technological production towards some form of true emancipation. Instead, the goal is directed towards improvements and pseudo-innovations enclosed within the realm of the means of production. Any byproduct of this machinery is easily neglected or, in the best-case scenario, reused for the virtue of higher (re)productivity. A productive (as opposed to reproductive) force should not be expected to arrive from the agents of the market. The market wants to continuously reproduce itself in new apparent forms. The challenge is that in the capitalist system, every agency is transfigured into an agency of the market’s whim. What strikes me in this unfortunate configuration is the enthusiasm of the art world, both amateur and institutional, with a stronger inclination towards the latter, which willingly plays the role of a sucker for the false promises of the apparent innovators, confusing originals with fakes.

Without further delving into the foundations of this predicament, let’s focus on the potential field of operation left to those interested in overthrowing the regime of stagnation. What is the way out of this hard limitation, or how does one operate with the excess that Bifo talks about? One way is to break with the determinacy of the stiff semiotic connections that constitute the tissue of our “rational” thinking. When dealing with language, the understanding of this proposition is fairly direct. Bifo describes it in the following way:

People are constantly sheltering themselves under the umbrellas of their limited languages, and their worlds are written on the undersides of these umbrellas. Poets cut the fabric of the umbrella and their incision discloses the unbearable vision of the true firmament.

By refusing to use language according to its established forms, by unleashing the potentiality of the word, poetry cuts through the fabric of illusory meaning and opens an ingress to the realm of true virtuality. A fascinating testimony to the groundbreaking potential of words arrives from the Rinzai Zen mystical practice of koans: short, language-constructed parabolic riddles designed to provoke an extra-rational “interpretation” or response that leads to enlightenment, a deep insight into the true nature of things. The trickery of language’s own prison is that, as with other human-invented tools like mathematics, it cannot reach beyond itself. It can, however, point towards a direction or provoke a response leading out of itself.

“When a wise man points at the moon, the fool looks at the finger.”

“Goddess of Pleasure”. Excerpt from my work “Latent Space Divination” exploring alternative transformations of latent space. The numerical coding of the word “pleasure” can be represented geometrically.
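As a purely illustrative sketch (not the actual method behind “Latent Space Divination”), one way to read a word’s numerical coding geometrically is to treat each dimension of an embedding vector as a radius at an evenly spaced angle, so the word becomes a closed glyph in the plane. The vector below is a random stand-in, since the real embedding of “pleasure” would come from a trained model.

```python
import numpy as np

# Hypothetical stand-in for the embedding of the word "pleasure";
# a real vector would come from a trained model's token-embedding matrix.
rng = np.random.default_rng(0)
embedding = rng.normal(size=16)

# One possible geometric reading: each dimension becomes a radius at an
# evenly spaced angle, turning the vector into a closed glyph in the plane.
angles = np.linspace(0.0, 2.0 * np.pi, num=embedding.size, endpoint=False)
radii = np.abs(embedding)                    # magnitudes as distances from the origin
points = np.stack([radii * np.cos(angles),   # x-coordinates
                   radii * np.sin(angles)],  # y-coordinates
                  axis=1)

print(points.round(2))                       # vertices of the glyph
```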

Excess of neural nets

With regard to neural nets, especially in the context of natural language processing, the proposition of breaking through the semiotic limitations should not be conflated with teaching machines how to produce poetry-like output. The process of poetry writing cuts diagonally through the architecture of language in a way that is specific to its own domain. This diagonal cut should always be designed to address the intricacies of the territory that it challenges. With deep learning, the scientific consensus inclines towards certain ways of treating the representation of knowledge contained in the connections of the model after training. The diagonal cut in this terrain should go through the transformation of the information contained in latent space. The poetry of AI provides an alternative to the overarching logic of the softmax function, which reduces the interpretation of infinite complexity to the “syntactical recognition of discrete states”, as Bifo frames it. Latent space itself has no limitations and no preferred ways of interpretation; it provides no guarantees, nor does it understand our desires. What has already been proven is its capacity to contain information, its ability to reproduce. Massive amounts of knowledge are stored in a reconfigured, dehegemonized, non-grammatical, non-semantic, rhizomatic form. The transformation that leads back to natural language is a manifestation of what Bifo gets at when talking about social communication as a limited process. There is a plethora of alternative transformations to explore: transformations that draw Deleuzian lines of flight, that cut through the actuality of the inflated meanings of language and social forms, transformations that unleash truly creative, poetic forces.
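To make the contrast concrete, here is a minimal sketch in Python using only NumPy, with randomly generated stand-ins for a real model’s hidden state and token embeddings. The first reading collapses a latent vector into a single discrete token through softmax and argmax; the second keeps the full continuous profile of affinities to every token. The vectors and the particular alternative transformation are illustrative assumptions, not a reconstruction of any specific model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a hidden state and a small vocabulary of token embeddings.
# Real values would come from a trained network; these are random.
hidden = rng.normal(size=64)           # a point in latent space
vocab = rng.normal(size=(10, 64))      # ten hypothetical token embeddings

# The dominant reading: project onto the vocabulary, squash with softmax,
# keep only the single most probable discrete symbol.
logits = vocab @ hidden
probs = np.exp(logits - logits.max())
probs /= probs.sum()
chosen = int(probs.argmax())           # continuous nuance reduced to one index

# One alternative reading (a sketch, not a prescription): keep the whole
# field of affinities uninterpreted, e.g. cosine similarity to every token.
cosine = (vocab @ hidden) / (np.linalg.norm(vocab, axis=1) * np.linalg.norm(hidden))

print("softmax verdict:", chosen)
print("continuous profile:", cosine.round(2))
```

The point of the second reading is not that cosine similarity is the “right” alternative, but that nothing in latent space itself forces the collapse into a single symbol.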
