The Digital Turn in Architecture 1992 - 2012
Ebook · 515 pages · 4 hours

By Mario Carpo (Editor)


About this ebook

Now almost 20 years old, the digital turn in architecture has already gone through several stages and phases. Architectural Design (AD) has captured them all – from folding to cyberspace, nonlinearity and hypersurfaces, from versioning to scripting, emergence, information modelling and parametricism. It has recorded and interpreted the spirit of the times with vivid documentary precision, fostering and often anticipating crucial architectural and theoretical developments. This anthology of AD’s most salient articles is chronologically and thematically arranged to provide a complete historical timeline of the recent rise to pre-eminence of computer-based design and production. Mario Carpo provides an astute overview of the recent history of digital design in his comprehensive introductory essay and in his leaders to each original text. A much needed pedagogical and research tool for students and scholars, this synopsis also relates the present state of digitality in architecture to the history and theory of its recent development and trends, and raises issues of crucial importance for the contemporary practice of the design professions. 
  • A comprehensive anthology on digital architecture edited by one of the most eminent scholars in the field, Mario Carpo.
  • Includes seminal texts by Bernard Cache, Peter Eisenman, John Frazer, Charles Jencks, Greg Lynn, Achim Menges and Patrik Schumacher.
  • Features key works by FOA, Frank Gehry, Zaha Hadid, Ali Rahim, Lars Spuybroek/NOX, Kas Oosterhuis and SHoP.
Language: English
Publisher: Wiley
Release date: Jun 27, 2013
ISBN: 9781118425916


    Introduction

    Twenty Years of Digital Design

    Building a multistorey car park these days typically involves more digital technologies than were available to Frank Gehry’s office for the design of the Guggenheim Bilbao in the early 1990s. Yet few of today’s car parks are hailed as examples of digitally intelligent design. In fact, in the first instance, a meaningful building of the digital age is not just any building that was designed and built using digital tools: it is one that could not have been either designed or built without them. Alert designers have ideas about what the new tools are and what they can do, and this intelligence – among many other things – inspires them to imagine unprecedented solutions.

    Following from these premises, and looking for salient ideas, we scoured the last 20 years of Architectural Design, and chose 26 essays on digital design matters published from 1992 onwards. They are republished here unabridged and unedited (except for some typos), accompanied when possible by the original illustrations, and in chronological order, so the sequence illustrates the unfolding of digital design theory over time. For sure, not all that matters in digital design was published in AD; but a lot of it was. Since Andreas Papadakis’s purchase of the magazine in the late 1970s, AD was long the main venue for debate on architectural Post-Modernism, and at times almost the official organ of the movement. Fifteen years later, having mostly missed the Deconstructivist turn, and shortly before Papadakis’s departure, AD started to embrace digital design with an enthusiasm almost equal to that of its pristine endorsement of the PoMos – a commitment to the digital cause that has continued unabated to this day.¹ The original footnotes of the articles included frequent references to other relevant titles on digital matters, and some bibliographic notes have been added in the introductions. All together, and in spite of evident limits to the range of sources and number of entries, this reader represents, we believe, a comprehensive synopsis of the recent history of digital design and its theory.

    As some of the earlier essays evince (see in particular Eisenman, pages 16–22 and 23–7, and Frazer, pages 49–52), the beginnings of the digital turn in architecture were first and foremost a matter of inspiration – and perhaps fascination. Electronic technologies in the early 1990s were changing – some thought, revolutionising – society, economy, culture and almost every aspect of daily life. So much was changing and so fast that some architects started to think that design should change too. No one quite knew how back then, and the first design experiments, in the spirit of the time, assumed that virtual reality, and cyberspace, would represent a radical alternative to the physical space of phenomena, existence and building. Some also concluded that many activities and functions would soon migrate from physical space to cyberspace, and that the design of new electronic venues in bits and bytes would soon replace the design of traditional buildings in bricks and mortar. AD devoted two noted Profiles to Architects in Cyberspace (in 1995 and 1998; a third one, also guest-edited by Neil Spiller, was planned in 2002 but the title was eventually changed to Reflexive Architecture). However, even at a time when many young architects thought that their future would be in web design, the development of new digital tools for design and fabrication suggested that electronics would drastically change the making of physical buildings as well.

    The emergence of a new digital tectonics in the early 1990s paralleled the technical development of spline modellers, a new generation of software that, thanks to the more general availability of cheap processing power, allowed the manipulation of curved lines directly on the screen, using graphic interfaces such as vectors and control points. The calculus-based parametric notations of the curves themselves thus became practically irrelevant, but two mathematical aspects of this spline-dominated environment have had vast and lasting design consequences: first, digital splines must be continuous (otherwise they could not be differentiated mathematically, and the system would stop working); second, spline curves are variable within limits, as they are notated as parametric functions. Setting limits for the variations of one or more parameters is the crucial design choice that determines the instantiation of a family of curves (lines, or surfaces). In turn, the idea of a generic, open-ended, parametric notation (as in Deleuze and Cache’s Objectile, on which more will be said below) implies the possibility that authorship may be split among multiple agents – on one side, the designers of the general function; on the other, its final customisers, or interactors (see Frazer, pages 53–6, and Cache, pages 146–51). This basic set of notions was and still is the warp and weft of digital design, and also the main reason why continuous lines and parametric variations remain to this day the hallmark of digitally inspired architecture.
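    The parametric logic described above, a generic notation whose parameters vary within designer-set limits, each setting instantiating one member of a family of curves, can be sketched in a few lines of Python. This is a hypothetical illustration, not anything drawn from the anthology: the choice of a quadratic Bézier curve, the function names and the limit values are all illustrative assumptions.

```python
def bezier_point(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def curve_family(height_limits, steps=5):
    """Instantiate a family of curves by varying the middle control
    point's height within designer-set limits: the generic notation
    plays the role of the objectile, each instance that of an object."""
    lo, hi = height_limits
    instances = []
    for i in range(steps):
        # One parameter value within the permitted range per instance.
        h = lo + (hi - lo) * i / (steps - 1)
        curve = [bezier_point((0, 0), (0.5, h), (1, 0), t / 10)
                 for t in range(11)]
        instances.append(curve)
    return instances

# Five sibling curves, all sharing one notation, differing only in h.
family = curve_family((0.5, 2.0))
```

    Here the designer of the general function fixes the notation and the limits, while a customiser or interactor picks a value of h within them; every choice yields a different instance of the same family.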

    Computers are famously versatile machines, and they do not express aesthetic preferences. One could use CAD/CAM technologies to mass-produce boxes and blobs, indifferently. However, unlike boxes, blobs cannot be mass-produced in the absence of digital tools (and in fact they were never mass-produced until recently). Would this simple argument: sheer technical supply – we make blobs because we can – be enough to explain the lasting tie between digital design and smooth curves? Probably not. The essays collected in this reader prove that curving folds emerged in the early 1990s as a design strategy internal to the architectural debate of the time – as a deliberate mediation or synthesis between Post-Modern unity of form and Deconstructivist fragmentation. Lynn’s theory of folding, Allen’s ‘field conditions’, and Foreign Office Architects’ writing of the time, among others, reiterate with different nuances this statement of principle; computers, splines and animation software were seldom or never mentioned (see Lynn, pages 29–44; FOA, pages 58–61; and Allen, pages 63–79). In retrospect, this current of digital design does appear like a continuation of Deconstructivism with digital means, and indeed many of today’s star architects came to the continuous folds of digital design after training in the angular fractures of Deconstructivism – a legacy that still shows in the work of many, from Zaha Hadid to Gehry to some works of Eisenman himself. But the so-called ‘Deleuze Connection’ (in architecture, the influence of Deleuze’s theory of the fold, relayed to designers through Bernard Cache’s technological interpretation of the Objectile: see pages 148–51) adds further layers to this story.

    The immediate architectural fallout of Deleuze’s fold, and most likely one reason for Eisenman’s and Lynn’s interest in the matter, was Deleuze’s exegesis of Leibniz’s mathematics of continuity, of calculus-based points of inflection (the ‘fold’) and parametric notations (the ‘objectile’). But, through Deleuze, it was a whole post-modern universe of thinking that offered itself, sometimes covertly or inadvertently, to the then nascent theory of digital design. Deleuze was interested in calculus as a quintessentially modern language of differentiality: calculus describes variations of variations, and is of no use in cultures or societies that are foreign to that notion (such as most of classical antiquity, for example). But in an odd reversal of alliances, the modernity of calculus as a notation of variations also partakes in a generically post-modern pattern of rhizomatic variability, complexity and fragmentation (of narratives, ‘récits’ or ‘strong referentials’). Deleuzian, post-modern variability was the cultural framework within which digital technologies were first put to task to design and produce variations (variations in form and variations in series, or mass customisation), and in this more general sense the digital turn in architecture can also be seen as a belated vindication of some of the principles of Post-Modern architecture itself: against Modernist standardisation, the PoMos had argued for differentiation, variation and choice; almost one generation later, digital technologies provided the most suitable technical means to that end. A philosopher and historian could even argue that, in a typical cultural-technical feedback loop, post-modern culture was the ‘favourable environment’ where digital technologies took root and to which they adapted to finally evolve in the way they did.

    Be that as it may, the essays in this reader provide ample evidence of another, equally pervasive kinship between architectural Post-Modernism and digital design. Systems theory, complexity science and the so-called theory of self-organising systems were part of the legacy that early cybernetics had bequeathed to contemporary digital design,² but they were destined for an odd revival of sorts in the 1990s. These and related theories on indeterminacy, chaos, etc, often merged with various morphogenetic metaphors which are particularly apt for describing the digital dialectic between script (code, genotype) and parametric variations (phenotypical adaptations). In the mid 1990s, Charles Jencks offered a comprehensive assessment of this notional field under the term ‘nonlinearity’ (in science and in design: see Jencks, pages 82–7 and 88–107). His synthesis did not immediately catch on, but most of the ideas underpinning it did. Oddly enough for a purportedly high-tech subject, this current of digital design theory was marked from the start by a starkly anti-technological bias.

    The theory of nonlinearity, or emergence, posits that, sometimes, nature ‘jumps’ from one state to another in sudden and unpredictable ways, which modern science can neither anticipate nor account for. The simile often invoked is that of a heap of sand, where new grains falling onto the top accrue in regular ways up to a catastrophic point when the heap collapses or, as some say, reorganises randomly (ie, the same experiment, repeated ad infinitum, always yields different results). This state of indeterminacy can in turn be interpreted in different ways: as a banal contingency, for example, due to a variety of factors (shortage of data, fallacy of modelling, external interference, etc); or as a general law of nature. The latter may not necessarily lead to animism, spiritism or black magic, but it does lead to endowing nature with some form of free will, opposite to modern science’s determinism (or, in theological terms, predestination); and once indeterminacy is seen as a state of nature, it is easy to see how computers, by the fuzzy ways they seem to work, may in turn be seen as ‘nonlinear’ thinking machines – or something similar to that. The notion that an electrically operated abacus can emulate the workings of nature and the faculties of human thinking may appear as a long shot; yet this is what research in artificial intelligence (and a lot of science fiction) is often about.

    Nonlinear arguments, often recast in tamer or more practical terms, are frequent within even mainstream digital design theory; they have occasionally acquired more controversial Nietzschean, Bergsonian, übermenschliche and vitalistic overtones (if the universe is ‘dancing on the verge of chaos’, the hero or the artist alone can bend it to serve their will); and in many ways they continue to underpin a robust romantic, often irrationalistic approach to digital design. In the early times of digital design, anti-mechanistic ideologies often inspired ‘ethereal’ and psychologistic notions of cyberspace and of digitally mediated, immersive environments (see Spuybroek, pages 109–16, and Oosterhuis, pages 117–23);³ in more recent times, they have equally furthered a more spiritual approach to digital tectonics, to the magic of materials and to the uniqueness of craft (whether manual, digital or digitally enhanced). Similar ideas were in part revived, in a more technological and less spiritual form, by the ‘emergence’ theories of the early 2000s, which still inform various practices of contemporary (2012) digital design, particularly in the field of ‘performative’ design experimentation (see Hensel, Menges and Weinstock, pages 160–64, and Menges, pages 165–81). Even though direct reference to phenomenology is rare, this side of digital design theory can be seen as an apparently unlikely but extremely fertile technologising of architectural phenomenology – or as a new phenomenology for the digital age. This convoluted genealogy of ideas also warrants the frequent albeit often understated or tacit sympathy of many digital designers for all the foes of industrial modernity – from Romanticism to Expressionism to Organicism. Not without reason: by chance or by design, digital technologies mass-produce variations and customise non-standards; they are anti-industrial hence post-modern, both in the philosophical and in the architectural sense of the term, and perhaps anti-modern in general as well.

    An enthusiastically anti-technological endorsement of new technologies is an improbable intellectual construction. Yet this dual, almost schizophrenic nature of digital theory was an essential component of digitally intelligent design from its very beginnings. It is embedded in, and derives from, the dual genealogy and double allegiance of digital architecture (kindred with Deconstructivism on one side, and with architectural Post-Modernism on the other), and its ambiguity is in many ways the ambiguity of post-modern thinking itself, at times a critique of the modernity from which it derived, at times a reactionary anti-modern stance in the political sense of the term. Not coincidentally, digital design theory was also the harbinger of and the training ground for much end-of-millennium post-critical thinking in architecture, and many digital designers in the 1990s (and sometimes beyond) professed a free-market, neoliberal political creed (albeit this aspect of digital design history is scantly represented in this anthology).

    When the dotcom crash of 2000 and 2001 stifled the wave of technological optimism and ‘irrational exuberance’ which had accompanied the rise of digitally inspired architecture in the 1990s, all the basics of today’s digital design theory – of digital theory as known to date – were already on the table. In the years that followed, as technology continued to evolve, digital design went through major theoretical developments alongside adaptations, fine-tuning and steady progress in practical implementation – almost all of which, however, occurred within the general theoretical ambit that was defined in the 1990s, or extrapolating from some of its trends.

    The essays republished here bear vivid witness to the ‘crisis of scale’ which marked non-standard form-making around the turn of the century: digital mass customisation, which had been proven to work effectively at the small scale of industrial design and fabrication, did not perform well at the full scale of construction (see Lynn, pages 45–7 and 125–30; FOA, pages 58–61; SHoP, pages 132–34; and Cache, pages 152–57). The shift from form-making to process that ensued prompted the adoption of new software for information exchange and for the management of building and construction tasks; this family of software, known under the generic name of Building Information Modelling (BIM), has been taking on increasingly important design roles (see SHoP, pages 135–45, and Garber, pages 227–39). At the opposite end of the digital design spectrum, a new generation of digital blobmakers⁴ offered new definitions of architectural curvilinearity, involving at times a rejection of their predecessors’ theoretical stance, and at times, quite to the contrary, a new theoretical awareness of the aesthetic implications of formalism (see Rahim and Jamelle, pages 213–20, and Foster Gage, pages 221–25). The universality of digital curve-making has been posited, in the strongest terms ever, by Patrik Schumacher’s theory of parametricism (see Schumacher, pages 243–57). But arguably the most significant digital development of the 21st century, the participatory turn in its multifarious manifestations, often generically known as the Web 2.0, has found but a feeble echo among the design professions. Interactivity and responsiveness have long been staples of electronically augmented environments, but automatic variations in most buildings are necessarily limited to non-constructive features, such as environmental controls, lighting, gadgetry or occasionally to moving parts of curtain walls or other surfaces. Participation in design is another story altogether.

    Today, many technologically savvy designers use open-source software, or exploit other collaborative aspects of networked technologies (see Hight and Perry, pages 189–200, and Morel, pages 201–07), but few if any envisage developing open-sourceable architectural design – ie, design notations that others could use and modify at will. Design and building have always been participatory endeavours, as no one (except perhaps Henry D Thoreau or Robinson Crusoe) can make a building all alone. Participatory authorship is inherent in the very idea of parametricism, from its Deleuzian beginnings, and BIM software was developed precisely to facilitate the exchange of digitised information among the many agents – human and technical alike – that must interact in large design and construction projects; not surprisingly, recent BIM software is increasingly fostering and facilitating collaborative and even collective decision-making strategies. Yet individual authorship has long been such an essential aspect of modern architecture that one can easily understand the mixed feelings of the design professions vis-à-vis a techno-social development that many feel might threaten or diminish the architect’s traditional authorial role.

    This reticence, however, is creating a gap between the digital environment at large, which is fast embracing generalised models of mass collaboration at all levels, with potentially huge intellectual, economic and social consequences, and the culture of digital design, which in this instance at least appears to have taken another path. This apparent decoupling is an ominous sign, as design theory used to be at the forefront of digital innovation: in the 1990s architects and architectural theoreticians were pioneers of the digital frontier – which is probably one reason why end-of-millennium digitally inspired architecture was so eminently and almost universally successful. The best digital architecture of that time was not only inspired by digital technologies – it was a source of technological and aesthetic education for all. Ten years later, it is difficult to be inspired by a soi-disant cutting-edge architectural style which has now been repeating itself, in a kind of precocious mannerism, for at least 15 years; much in the same way as young architects may find it difficult to be inspired by soi-disant cutting-edge design theories which are often recapitulating, rephrasing or revising ideas that have been around since the early 1990s. Not surprisingly, many in the design professions – including noted AD authors like Neil Spiller – have recently started to look elsewhere for the next big thing. In fact, AD itself has already tentatively inaugurated a new age of post-digital high-tech.⁵

    Some of this gloom may be premature. This digital reader is not meant as an epitaph or obituary. The shift from mechanical to digital technologies is a major historical turning point, and the makings and unmakings of the digital age should be assessed in the long view of the history of cultural technologies. This is particularly true for architecture, because architecture itself, as we know it, is the fruit of the early modern techno-cultural invention of architectural notations and of authorial design, and follows from that bizarre Renaissance claim that a new kind of humanist author, called an architect, should make drawings, not buildings. Today’s computers do not work that way and, after upending the Modernist tenets of mass production, economy of scale and standardisation – that was the easy part – digital tools for design and construction are now unmaking the Albertian, humanistic principles of allographic notation. The resulting shift, from mass customisation to mass participation, may be more disruptive for architectural production than the digitally induced dominion of the spline to which we are now almost getting accustomed: again, that was the easy part.⁶ If history has something to teach – the digital history of the last 20 years, as well as the architectural history of the last 20 centuries – the best, or at least the most momentous days of the digital turn may still be ahead of us.

    Notes

    1 In 1992 Andreas Papadakis, the Editor and proprietor of Architectural Design (AD), sold AD as part of the Academy Group, the architectural and design publishers he owned, to VCH, a large German scientific, technical and professional publisher. Papadakis continued with VCH as Editorial Director of Academy until the end of 1994. On his departure, Maggie Toy took over as Editor of AD. In 1997 the American-owned scientific and professional publishers John Wiley & Sons acquired VCH and with it AD and the Academy architecture list; Helen Castle has been Editor of AD since 2000. Throughout the 1990s, under Papadakis and then Toy, while generously promoting the first steps of digital design, AD continued to feature Classicists and Post-Modernists, with sometimes bizarre juxtapositions: in 1993, Greg Lynn’s seminal digital Profile, Folding in Architecture, was included in an AD issue mostly featuring Russian neoclassical architecture (AD Profile 102, AD 63, March–April 1993. Reprinted London: Wiley-Academy, 2004).

    2 On the rift between early architectural applications of cybernetics, shape grammars and today’s visually oriented spline modellers, see Frazer, pages 49–52, and McCullough, pages 183–87.

    3 Among the first artists and theoreticians of virtuality, Marcos Novak contributed seminal essays and artwork to the AD ‘cyberspace’ issues of the 1990s, including the image reproduced on the cover of this book.

    4 The architectural blob was famously invented by Greg Lynn in the 1990s: see his ‘Blobs (or Why Tectonics is Square and Topology is Groovy)’, ANY 14, May 1996, pp 58–62.

    5 See Neil Spiller and Rachel Armstrong (guest-editors), Protocell Architecture, AD Profile 210, AD 81, March–April 2011, p 17, where the design of new materials (including, more esoterically, the design of living materials suitable for building) is presented as the new technological frontier of architecture.

    6 See Mario Carpo, The Alphabet and the Algorithm, MIT Press (Cambridge, MA), 2011.

    © 2012 John Wiley & Sons Ltd.

    Architecture After the Age of Printing (1992)

    Peter Eisenman

    These two essays by Peter Eisenman inaugurate digital discourse in architecture in the 1990s, and highlight the continuity between Deconstructivism and the first age of digital design. The contrast between the photograph and the telefax, cited in both essays, refers less to image-making than to the different nature of mechanical and digital reproducibility: unlike mechanical copies, which once printed are fixed and stable, digital images derive from number-based notations, or files, that can morph and change all the time. In Eisenman’s reading, the new paradigm of electronic mediation destabilises and ‘dislocates’ centuries-old habits of anthropocentric vision, rooted in the monocular, perspectival tradition and in the modern technologies of mechanical reproduction, and should inspire and prod architects to further contest ‘the space of classical vision’ and break ‘the gridded space of the Cartesian order’.

    In the first essay Eisenman also refers to Gilles Deleuze’s theory of ‘the fold’ (from Deleuze’s book The Fold, Leibniz and the Baroque, first published in French in 1988, and which would inspire a seminal issue of AD, guest-edited by Greg Lynn with major contributions by Eisenman himself: see pages 28–47). Folding, Eisenman argues, may provide a new ‘strategy for dislocating vision’, by subverting the hierarchy of interior and exterior and by weakening the notational correspondence between drawing and building. The second essay republished here, ‘The Affects of Singularity’, does not mention Deleuze but refers to ‘singularity’ as a new ontological condition of the subject – significantly, not of the object, yet, Eisenman suggests, equally opposed to the mechanical ideas of standardisation and repetition, and in sync with the new technical logic of electronics.

    Both Deleuze’s ‘fold’ and the notion of digital singularity will become topoi of digital discourse in the 1990s. Similar notions of ‘singularity’ also generally refer to the new post-modern differentiation of subjects and objects alike. But in 1992 the digital is not yet a tool for a new mode of design, or even less for building; the rise of electronics is seen here as a general techno-cultural shift that should inspire architects to engage with an unprecedented cultural environment and with a new view of the world. Electronics, in Eisenman’s view, vindicate and corroborate the stance of all the historical enemies of the dominion of the classical eye. Deleuze’s Folding is seen as a new Deconstructivist weapon of choice, and the forthcoming digital fold as a continuation of Deconstructivism by electronic means.

    Visions Unfolding: Architecture in the Age of Electronic Media AD September–October 1992

    During the 50 years since the Second World War, a paradigm shift has taken place that should have profoundly affected architecture: this was the shift from the mechanical paradigm to the electronic one. This change can be simply understood by comparing the impact of the role of the human subject on such primary modes of reproduction as the photograph and the fax; the photograph within the mechanical paradigm, the fax within the electronic one.

    In photographic reproduction the subject still maintains a controlled interaction with the object. A photograph can be developed with more or less contrast, texture or clarity. The photograph can be said to remain in the control of human vision. The human subject thus retains its function as interpreter, as discursive function. With the fax, the subject is no longer called upon to interpret, for reproduction takes place without any control or adjustment. The fax also challenges the concept of originality. While in a photograph the original reproduction still retains a privileged value, in facsimile transmission the original remains intact but with no differentiating value since it is no longer sent. The mutual devaluation of both original and copy is not the only transformation effected by the electronic paradigm. The entire nature of what we have come to know as the reality of our world has been called into question by the invasion of media into everyday life. For reality always demanded that our vision be interpretive.

    How have these developments affected architecture? Since architecture has traditionally housed value as well as fact, one would imagine that architecture would have been greatly transformed. But this is not the case, for architecture seems little changed at all. This in itself ought to warrant investigation, since architecture has traditionally been a bastion of what is considered to be the real. Metaphors such as house and home, bricks and mortar, foundations and shelter attest to architecture’s role in defining what we consider to be real. Clearly, a change in the everyday concepts of reality should have had some effect on architecture. It did not because the mechanical
