
New Archetypes

Published in Il Giornale Dell'Architettura (19 June 2023)



More than 70 years have passed since the first experiments with Artificial Intelligence, and for at least twenty years parametric design, generative algorithms and, more generally, computation have introduced post-human elements into the way architecture is conceived and practiced.
However, the renewed interest in Artificial Intelligence differs from previous explorations because of the concurrence of two fundamental factors in technological progress: the power and speed of the latest generation of supercomputers, and the availability of an amount of data unprecedented in the history of mankind.

The flow of data processed in real time controls many aspects of our lives: from bank transactions to geolocation technologies, from the consumer price index to air traffic and weather forecasts. The term Big Data refers to very large datasets which, when analyzed by computers with sufficient computing power, reveal patterns, trends and associations that would remain invisible in any partial analysis. In essence: if a fact (a datum) occurred in the past, was recorded and filed, and can therefore be processed, then under similar circumstances that fact will recur in the future. This simple principle underlies the extraordinary predictive and optimization capacity produced by the combined effect of Big Data and Artificial Intelligence.
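
As a toy illustration of this inductive principle, here is a minimal Python sketch (the weather records and the nearest-neighbour rule are hypothetical, chosen only for the example): a new set of circumstances is matched against recorded facts, and the outcome observed most often under the most similar past circumstances is predicted to recur.

```python
from math import dist

# Hypothetical historical records: (temperature in °C, humidity in %) -> outcome.
history = [
    ((30.0, 85.0), "rain"),
    ((31.0, 40.0), "clear"),
    ((18.0, 90.0), "rain"),
    ((22.0, 35.0), "clear"),
]

def predict(circumstances, records, k=3):
    """Predict the outcome seen most often in the k most similar recorded facts."""
    nearest = sorted(records, key=lambda r: dist(r[0], circumstances))[:k]
    outcomes = [outcome for _, outcome in nearest]
    return max(set(outcomes), key=outcomes.count)

# Under circumstances similar to past rainy days, rain is predicted to recur.
print(predict((29.0, 80.0), history))  # -> "rain"
```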

However, a problem must be quantifiable to be optimized. If, on the one hand, some aspects of architecture can be quantified and therefore processed by machines, on the other, architects still rely on knowledge deriving from experience, intuition, example or the imitation of models: in other words, they make use of the tacit knowledge described by Polanyi as «knowing more than we can tell». It is no coincidence that architects learn the secrets of their trade by frequenting the 'workshop' of the most experienced masters. In fact, not all architectural problems are quantifiable or algorithmically solvable.

The split between engineering and architectural skills has meant that the mathematics of architectural design gravitated away from structural calculation towards issues of schemes, topologies, geometries, orientations and networks. The adoption of an increasingly quantitative approach (modularity, proportional relationships, environmental considerations, and so on) has thus marked a departure from an empirical approach to architecture which lacked objective criteria and had historically characterized its indeterminacy. The development of this measurable knowledge, algorithmically reproducible and therefore transferable into software, has led to a greater understanding of architectural design and its effects.
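
A small sketch of what such measurable knowledge can look like once transferred into software (the module size, tolerance and rules below are assumptions made for the example, not criteria from the article): quantifiable properties such as modularity and proportion become checks a program can run over a design.

```python
# Hypothetical quantifiable rules for a facade bay: modular coordination
# and a target proportion. MODULE and the tolerance are assumed values.
MODULE = 0.6                   # planning module in metres (assumed)
GOLDEN = (1 + 5 ** 0.5) / 2    # target proportion used for the example

def check_bay(width, height, tolerance=0.05):
    """Report which of the example's quantifiable rules a width x height bay satisfies."""
    return {
        "modular": abs(width / MODULE - round(width / MODULE)) < tolerance,
        "golden_ratio": abs(height / width - GOLDEN) < tolerance,
    }

print(check_bay(3.0, 4.85))  # -> {'modular': True, 'golden_ratio': True}
```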

Today, Generative Adversarial Networks (GANs) are systems capable of processing images (together with their textual descriptions) and extrapolating the archetypal characteristics they share. In this way, GAN systems create original images by imitating a reference archetype. For centuries, in the history of art as in architecture, there has been discussion about the relationship between the original and the copy, about how to interpret la bella maniera degli antichi (the beautiful manner of the ancients), or even about the concept of style understood as a common trait and sensibility linking a group of works. Current AI-powered software can produce new archetypes in a matter of seconds, from any text prompt, thanks to the hybridization of millions of data points. Jean Baudrillard would have called them simulacra: copies without originals. Old concepts, perhaps, but ones around which, in the era of synthetic intelligence, designers may need to develop a renewed critical acumen and unprecedented creative abilities.
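
For readers who want to see the adversarial mechanism itself, here is a minimal GAN sketch in PyTorch (the framework, the one-dimensional data and all hyperparameters are assumptions made for the example): a generator learns to imitate samples from a reference distribution, standing in for the 'archetype', while a discriminator learns to tell original from copy.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to an imitation of the reference data.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is an "original".
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 4.0   # reference "archetype": N(4.0, 0.5)
    fake = G(torch.randn(64, 8))             # the generator's imitation

    # Discriminator step: label originals 1 and copies 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push the discriminator to label copies as originals.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The mean of generated samples drifts towards the reference mean (~4.0).
print(G(torch.randn(256, 8)).mean().item())
```

The two networks are trained against each other: each improvement in the discriminator's ability to spot copies forces the generator towards more convincing imitations, which is the adversarial dynamic the text describes.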
