The here and now

Zylinska’s book AI Art: Machine Visions and Warped Dreams takes down the basic conceptual framings of the social understanding of the artifice in artificial and the premises of intelligence in the blanket term artificial intelligence, especially in its relation to art production (stemming from the Latin artificium, signaling art and craft). She offers a critique of the utopian-versus-dystopian framing in current narratives on AI, which is premised on a “model of the human as a self-enclosed non-technological entity, involved in eternal battle with tekhnē”. At the same time, she argues that the set of intelligence markers used to conceptualize AI is based on a disembodied and yet gendered model of human subjectivity. Key to understanding her arguments is that she positions tekhnē as a constitutive fabric of human and nonhuman life, and the human as a profoundly technical being, connecting her understanding of technology to that of Flusser. This is partly visible in the popular culture Zylinska recalls, which shows human characters who excel at pattern recognition, effectively performing a kind of narrow AI by extracting patterns from large datasets.

She introduces us to the historical shift from an interest in general AI to narrow AI - neural networks working on large datasets with genetic algorithms. This narrow concept is, however, taken from a familiar fatherly figure - Aristotle and his understanding of deduction, based on an assumption of a truth-value. This assumption is carried so lightly largely because of the unquestioned authority of Aristotelian logic, but also because the foundational aspects of AI are mostly used, taken for granted, and adjusted to the interests of one’s own direction of research. I find this point key to most matters regarding AI that Zylinska discusses.

She brings up Isaac Asimov’s Three Laws of Robotics, pointing to the fact that the “naively humanist” laws were written in the landscape of fiction. What is even more fascinating to me is that those laws are still invoked today in discussions on AI among people working in the field of technology. “Doing ethics as/in fiction,” on the other hand, does put forward an inspiring framework for exiting the traps of the contemporary AI landscape, which is intertwined with the notion of technology as the other - outsourcing human agency, as well as responsibility, in producing and interacting within this technological environment. The question of ethics is also connected to the concept of general human values, which Zylinska criticises for avoiding the elephant in the room: the society we live in has built itself on inconsistent values - what protects some doesn’t protect all, and is furthermore used to dehumanize those it doesn’t protect.

For this conversation, what is more relevant than an art conversation is to bring the discussion into policy. Here I am completely in line with Zylinska, and here I think art can have an engaging and relevant future as a mediator of these conversations - a path I pursue in my own art practice in policy. One point she puts forward - that policy is instrumentalized by the private sector to outsource responsibility for applying and thinking through ethics - is probably the main blind spot of AI discussions, as can be seen in Microsoft’s principles for AI ethics, for example. However, the private sector is not alone in that matter. The university system, led by institutions such as MIT, has become intertwined in this discourse, producing further legitimacy for how public concerns are framed and managed.

And in dealing with issues that could be perceived as global, perception is flattened by ethical tech players such as Reimagine AI, which Zylinska mentions, numbing and beautifying the question of what Donna Haraway calls planetary survival into controllable numbers. Most of the issues discussed today sit at the level of a hyperobject multiplied by a black box. Because these are impermeable, there is a certain sense of comfort in obtaining control over the elements that are in reach - such as visualizing data. In a peculiar way, the idea that visualization lulls the mind into thinking a problem is already contained simply by putting it on paper is heavily abused in the tech industry, and as such becomes even more prevalent in dealing with such hyperobjects, if we were to subscribe to the concept. A recent example of defense by visuals is the response YouTube provided after being accused of wrongfully taking down legitimate content during the COVID-19 pandemic: graphs that did nothing but visualize that same process.

Understanding the Anthropocene as a technical problem to be solved manifests itself through what she calls masculinist solutionism. It is particularly interesting to think about how, already in 2019, the very term Anthropocene had been criticized for the same kind of instrumental, solutionist-oriented human-centeredness, and had been substituted by discussions of the Planthroposcene and the Plantationocene.

Freeform directions for further thought

Art here and now is not only valuable but necessary. AI can serve as a bridge for inquiring into the labor of art production.

Interestingly enough, in Serbian the phrase for artificial intelligence has no connection to the Latin artificium and therefore to art, while between Serbian and Croatian the actual word for art has separate roots. In Serbian, the word is connected to the verb “umeti”, meaning to be capable - the art of being able - while in Croatian, umjetnost (art) shares its root with umjetno, which means artificial. Linked together, art is a building block of the artificial and artifice a building block of art, and both are manifested through practicing the art of being capable.

As for the future of art, I found it fascinating that the Waterfall of Meaning project Zylinska refers to is actually a Google project in an exhibition, which in itself raises an even more intriguing question: will future art exhibitions have megacorporations such as Google as ‘artists’ involved in the show?