Friday Sept. 27 - Lectures and Workshops

IMPROTECH Paris - Αθηνα 2019



Detailed program




Friday Sept. 27 - Lectures

University of Athens, 09:30 - 13:30

Algorithms, AI and Improvisation



Movies and conference recordings available on YouTube

09:30

Keynote talk: Perception, embodiment, and expressivity in human and computer improvisation

George Tzanetakis (University of Victoria, Ca)

The majority of research in computer systems for composition and improvisation has been based on symbolic representations and follows a stylistic imitation paradigm. There are some inherent limitations to these approaches that are especially apparent in the context of improvisation. After a brief overview of existing approaches, I will argue that, in order to create more effective improvisation systems, it is critical to integrate perception, embodiment, and expressivity, and also to consider audio representations. This integration will be motivated using specific examples from human, computer, and human-computer improvisation scenarios. It is my hope that this exploration will help us better understand and appreciate the complexity of music improvisation and inspire future research that considers perception, embodiment, and expressivity.

10:15

It Ain’t Over Till It’s Over: Theory of Mind, Social Intelligence and Improvising Machines

Ian Gold and Eric Lewis (McGill University, Ca)

Improvising machine systems have made remarkable advances in the appropriateness of their contributions to collective improvisations. It has, however, proven to be intractably difficult to create an improvising system that seems aware, to the same degree that experienced human improvisers are, of when a collective improvisation is coming to an end. We explore the role that theory of mind plays in collective improvisation, and suggest that the inability of machines to manifest theory of mind may lie behind this failure. We suggest that a false model of collective improvisation, and a false model of theory of mind, have occluded the importance of theory of mind to collective improvisation. We also survey a number of experiments that we hope to undertake to help establish the connections we hypothesize, and suggest what this may mean for the future of improvising machine system design, and for the role of improvisation in assorted therapeutic contexts.

10:45

Improvising with augmented organ and singing instruments: gesture, sound, music (Cantor digitalis)

Christophe d’Alessandro (Sorbonne University, Fr)

In this talk I present a reflection on my practice of improvisation with the augmented pipe organ and voice instruments.

In the augmented organ, the pipe sounds are captured, transformed and then played back in real time in the same acoustic space as the direct pipe sounds. Our augmented organ projects rely on three main aesthetic principles: microphony (proximal sound capture), fusion (of acoustic and electro-acoustic sounds) and instrumentality (no fixed support or external sound source). The augmented organ can be played solo or in duo (organist + live-electronics player). Solo performance is more challenging, as the organist must control additional interfaces while hands and feet are already busy with the keyboards, pedalboard, expression pedals, and combination and registration controls.
Performative vocal synthesis allows for singing or speaking with the borrowed voice of another. The relationship of embodiment between the singer’s gestures and the vocal sound produced is broken: a voice is singing, with realism, expressivity and musicality, but it is not the musician’s own voice, and it is not controlled by a vocal apparatus. These instruments allow for voice deconstruction, voice imitation and voice extension. Specific vocal gestures are replaced by hand gestures on control interfaces such as graphic tablets, MPE keyboards, and even the (augmented) Theremin.
I will argue that the augmented organ (including extended techniques and new control interfaces) stands in continuity with, rather than in rupture from, the organ improvisation tradition. Pipe organs are complex timbral synthesisers which have always accompanied the evolution of music and technology. Improvising with performative vocal synthesis is a more disturbing experience, because linguistic meaning, vocal intimacy and personality are mixed or even confused in vocal performance, at the (possibly interesting) risk of an “uncanny valley” effect.
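For illustration only, here is a minimal Python sketch of the capture-transform-playback loop described above, written with the sounddevice library; the ring-modulation transform, sample rate and modulation frequency are assumptions for the example, not the authors' actual implementation.

```python
# Minimal capture -> transform -> playback sketch (illustrative, not the
# authors' implementation). Assumes the Python 'sounddevice' library and a
# low-latency audio interface; the ring-modulation transform is arbitrary.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000
MOD_FREQ = 110.0          # illustrative modulation frequency (Hz)
phase = 0.0

def callback(indata, outdata, frames, time, status):
    """Capture the (pipe) sound, transform it, and play it back immediately."""
    global phase
    if status:
        print(status)
    t = (np.arange(frames) + phase) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * MOD_FREQ * t)[:, None]
    outdata[:] = indata * modulator       # ring modulation of the live input
    phase += frames

with sd.Stream(channels=1, samplerate=SAMPLE_RATE, callback=callback):
    sd.sleep(10_000)                      # run the augmented loop for 10 seconds
```

In an actual concert setting the transformed signal would be diffused through loudspeakers in the same acoustic space as the direct pipe sound, following the "fusion" principle described above.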

11:15

Coffee Break

11:45

Creativity, blending and improvisation: a case study on harmony

Emilios Cambouropoulos (Aristotle University of Thessaloniki, Gr.)

One of the most advanced modes of creativity involves making associations between different conceptual spaces and combining seemingly unrelated constituent elements into novel meaningful wholes. Composers and improvisers often actively employ combinational and fusion strategies in producing original music creations. In this presentation we focus on issues of harmonic representation and learning from data, giving special attention to the role of conceptual blending in melodic harmonization. Models are presented for statistical learning of harmonic concepts (chord types and transitions, cadences and voice-leading) from musical pieces drawn from diverse idioms (such as tonal, modal, jazz, octatonic, atonal and traditional harmonic idioms). Then, a computational account of concept invention via conceptual blending is described that yields original blended harmonic spaces. The CHAMELEON melodic harmonisation assistant (new online version) produces novel harmonisations in diverse musical idioms for given melodies and also blends different harmonic spaces, giving rise to new ‘unexpected’ outcomes. Many musical examples will be given that illustrate the creative potential of the system. Such sophisticated blending methodologies can be incorporated in interactive improvisation systems, allowing the creation and exploration of novel musical spaces (bypassing mere imitation).
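To make the idea of statistical learning of harmonic concepts concrete, the following Python sketch learns first-order chord-to-chord transition counts from a toy corpus and samples a progression from them. The chord symbols and corpus are invented for the example; this is not the CHAMELEON code, only an illustration of the kind of model it learns from data.

```python
# Sketch of statistical learning of chord transitions from a corpus
# (illustrative toy example, not the CHAMELEON implementation).
from collections import Counter, defaultdict
import random

corpus = [
    ["I", "IV", "V", "I"],
    ["I", "ii", "V", "I"],
    ["I", "IV", "ii", "V", "I"],
]

# Count first-order chord-to-chord transitions across the corpus.
transitions = defaultdict(Counter)
for piece in corpus:
    for prev, nxt in zip(piece, piece[1:]):
        transitions[prev][nxt] += 1

def next_chord(current):
    """Sample the next chord in proportion to its learned transition count."""
    counts = transitions[current]
    chords, weights = zip(*counts.items())
    return random.choices(chords, weights=weights)[0]

# Generate a short progression from the learned model.
progression = ["I"]
for _ in range(5):
    progression.append(next_chord(progression[-1]))
print(progression)
```

Blending, in this picture, would amount to combining transition statistics learned from two different idioms into a single generative space, which the toy model above does not attempt.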

12:15

Do the math: Musical creativity and improvisation under the spectrum of information science

Maximos Kaliakatsos-Papakostas (Ionian University, Gr.)

Musical scores include information that is mostly sufficient to reproduce a musical work or the performance of an improvisational agent; this information can be considered as "low-level", if micro-timing, performance-specific or timbre-related information is disregarded. High-level structures emerge from patterns that combine low-level attributes: cadences, harmony and rhythm, among others, are higher-level constructions that build upon fine-grained combinations of low-level elements. Humans have the ability to implicitly identify such structures and readily employ them when listening, composing or improvising music, but to what extent can such human cognitive processes be algorithmically modelled? What would such modelling be practically good for? This lecture presents the problem of algorithmically modelling music cognition and creativity through methods of information science. Particular focus is placed on pattern extraction through generalisation (or information reduction) which is directly related to statistical learning. An intuitive presentation of the relations between these concepts and deep learning is given and, finally, some thoughts are openly discussed with the audience about how the latest advances in Machine Learning can be of practical use to the composer, the improviser or the music enthusiast.
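As a small illustration of "pattern extraction through generalisation", the Python sketch below reduces melodies to interval sequences (discarding absolute pitch, an information reduction) and counts recurring n-grams; transposed melodies then share the same abstract patterns. The melodies and n-gram length are assumptions made for the example, not material from the lecture.

```python
# Sketch of pattern extraction through generalisation: melodies are reduced
# to pitch intervals, and recurring n-grams are counted (illustrative only).
from collections import Counter

melodies = [
    [60, 62, 64, 65, 67],      # C D E F G  (MIDI note numbers)
    [67, 69, 71, 72, 74],      # G A B C D  (same contour, transposed)
]

def to_intervals(notes):
    """Generalise a melody to its interval sequence (information reduction)."""
    return tuple(b - a for a, b in zip(notes, notes[1:]))

def ngrams(seq, n=3):
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

pattern_counts = Counter()
for melody in melodies:
    pattern_counts.update(ngrams(to_intervals(melody)))

# The transposed melodies now share the same abstract patterns.
print(pattern_counts.most_common(3))
```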

12:45

Children’s improvisations using reflexive interaction technologies – Computational music analysis in the European Project MIROR

Christina Anagnostopoulou, Aggeliki Triantafyllaki, Antonis Alexakis (University of Athens, Gr.)

While improvisation has been an essential component of music throughout history, its manifestation in children’s music-making is a debated issue (Azzara, 2002). At the same time, research has revealed that improvisation is a significant aspect of children’s musical development and an important avenue of creativity (Webster, 2002; Ashley, 2009). When children are improvising, particularly at an early stage of development, they usually try to express themselves without following any particular rules. Creativity can then emerge naturally (Koutsoupidou & Hargreaves, 2009). New technologies can support this natural development and help children develop their own musical style.
The European Project MIROR (Musical Interaction Relying on Reflexion, mirorproject.eu) was based on a novel spiral design approach involving coupled interactions between computational and psycho-pedagogical issues. It introduced an AI-based improvisation system, MIROR-IMPRO (Pachet et al. 2011), based on the original Continuator (Pachet 2003). The project integrated various psychological experiments, aiming to test cognitive hypotheses concerning the mirroring behaviour and the learning efficacy of the system, and validation studies aiming at developing the software in concrete educational settings. The philosophy behind the project was to promote the reflexive interactive paradigm not only in the field of music learning but more generally as a paradigm for establishing a synergy between learning and cognition in the context of child/machine interaction (Addessi et al. 2015).
In the present paper we explore the thesis that computational music analysis of the improvisations children produce with this technology, aimed at finding regularities and significant patterns, can provide a valuable tool for blending the technology even more constructively into children’s musical routine. On the one hand, it offers the teacher assistance in providing musical direction; on the other, it gives the learner a means of independently advancing his or her musical capabilities through playful interaction. To achieve this, we employed specialised data-mining techniques and developed a set of lexicographically empowered investigation software tools to analyse the musical corpus produced by the children’s improvisations. We present part of our results from the analysis of children’s improvisations, and we discuss the general advances that the MIROR Project brought to the area of children’s improvisation.
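For readers unfamiliar with the reflexive interaction paradigm, the Python sketch below illustrates the general Continuator-style idea of learning from a played phrase and answering with a continuation generated from variable-length contexts. It is a toy example under assumed note values, not the MIROR-IMPRO or Continuator code.

```python
# Toy sketch of a Continuator-style "mirroring" reply: learn from the child's
# phrase, then continue it from the longest known context (illustrative only).
import random
from collections import defaultdict

def learn(phrase, max_order=3):
    """Map every context of length 1..max_order to the notes that followed it."""
    model = defaultdict(list)
    for i in range(1, len(phrase)):
        for order in range(1, max_order + 1):
            if i - order >= 0:
                model[tuple(phrase[i - order:i])].append(phrase[i])
    return model

def continuation(model, seed, length=8, max_order=3):
    """Generate a reply, preferring the longest context seen in the input."""
    out = list(seed)
    for _ in range(length):
        for order in range(max_order, 0, -1):
            context = tuple(out[-order:])
            if context in model:
                out.append(random.choice(model[context]))
                break
        else:
            break   # no known context: end the reply
    return out[len(seed):]

child_phrase = [60, 62, 64, 62, 60, 62, 64, 65]   # illustrative MIDI input
model = learn(child_phrase)
print(continuation(model, seed=child_phrase[:2]))
```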


13:30

Lunch Break




Friday Sept. 27 - Workshops

Onassis STEGI, 16:00 - 19:00

Body and Drama



16:00 - 17:00

Kinaesonics: crafting and being trans-dimensional (Bodycoder system)

Mark Bokowiec, Julie Wilson-Bokowiec (University of Huddersfield, UK)

In this workshop/demo we will unpack our particular approach to Kinaesonic composition and the multi-dimensional nature of our brand of live performance with the Bodycoder System. We will explore the critical intersections where liveness meets the programmed and the automated, consider the aesthetic as well as the socio-political implications, and discuss the role and qualities of improvisation employed in the new work we will present at Onassis STEGI.

17:00 - 18:00

Interactive Drama Tools

George Petras (National School of Dance, Gr.), Panagiotis E. Tsagkarakis (Freelance Engineer), Anastasia Georgaki (University of Athens, Gr.)

The use of novel interactive technologies in the performative arts provides dynamic tools for the improvisation and expressiveness of the actor/musician during a performance.
Our research focuses on the development of interactive tools used in the context of ancient Greek Drama and Prosodic recitation.
Firstly, we will present theoretical and practical aspects of the use of voice in drama performance, showing how we use individual elements of ancient Greek prosody, as well as transposed ancient music theories (such as Aristoxenus’s curve of the “logodes melos”), in the interactive process.
Secondly, we will present the technical and practical aspects of the interactive platform, explaining areas such as sensors, data extraction, mapping and sound design.
The interactive tools are built to support and develop the performer’s ability to improvise in two ways: sonic improvisation and structural improvisation. Sonic improvisation is achieved by focusing on voice and sound processing, with the performer manipulating the sonic outcome in order to enhance the prosodic interaction and the emotional meaning of the text. Structural improvisation allows the performer to move freely between scenes, since the cues are controlled through gestures and key positions in space.
The workshop includes a short performance presenting the interactive platform in action, aiming to show how the theoretical, technical and performative aspects merge.
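As a rough illustration of the sensor-data-mapping-sound chain, the Python sketch below sends mapped control values to a sound engine over OSC using the python-osc library. The OSC addresses, sensor ranges, target port and the scene-cue threshold are hypothetical; this is not the authors' platform, only a sketch of the mapping idea.

```python
# Minimal sketch of a sensor -> mapping -> sound chain over OSC
# (hypothetical addresses and values; illustrative only).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # e.g. a local sound engine

def map_sensor_to_sound(accel_y, grip_pressure):
    """Map raw sensor values to voice-processing parameters."""
    pitch_shift = (accel_y / 9.81) * 12.0            # +/- one octave, illustrative
    reverb_mix = min(max(grip_pressure, 0.0), 1.0)   # clamp to [0, 1]
    client.send_message("/voice/pitch_shift", pitch_shift)
    client.send_message("/voice/reverb_mix", reverb_mix)

def maybe_trigger_scene(accel_magnitude, threshold=25.0):
    """A sharp gesture above the threshold advances to the next scene cue."""
    if accel_magnitude > threshold:
        client.send_message("/scene/next", 1)

map_sensor_to_sound(accel_y=4.9, grip_pressure=0.7)
```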

18:00 - 19:00

Collective performance and improvisation using CoMo-Elements

Michelle Agnes Magalhaes (composer, Ircam, Fr.), Frédéric Bevilacqua (researcher, Ircam, Fr.)

Using the web application CoMo-Elements (como.ircam.fr), this workshop proposes an approach to collective performance with mobile phones, used both as motion sensors and as interactive sound systems. Each mobile can be “played” using gestures. The application allows users to design their own gestures and associate them with specific sounds. Additionally, all mobiles can be synchronized and remotely controlled, allowing musical structures to be either composed and performed, or more directly improvised collectively. We will present the CoMo-Elements system, along with examples of possible uses, allowing participants to explore various possibilities. Musical material and small musical pieces by Michelle Agnes Magalhaes will be proposed to the participants, to be collectively experimented with and discussed.
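For readers curious about the gesture-to-sound association in the abstract sense, the Python sketch below records a few example gestures and matches incoming motion to the nearest recorded template to choose a sound. It is a generic nearest-template classifier with invented gesture names and sound files, not the CoMo-Elements implementation or API.

```python
# Illustrative gesture -> sound association via nearest-template matching
# (generic sketch; not the CoMo-Elements implementation).
import numpy as np

templates = {}   # gesture name -> averaged accelerometer feature vector
sounds = {"circle": "bell.wav", "shake": "cymbal.wav"}   # hypothetical sounds

def featurise(accel_frames):
    """Summarise a gesture (list of 3-axis accelerometer frames) as a vector."""
    a = np.asarray(accel_frames, dtype=float)
    return np.concatenate([a.mean(axis=0), a.std(axis=0)])

def record_gesture(name, accel_frames):
    templates[name] = featurise(accel_frames)

def classify(accel_frames):
    """Return the sound associated with the closest recorded gesture."""
    x = featurise(accel_frames)
    name = min(templates, key=lambda n: np.linalg.norm(templates[n] - x))
    return sounds[name]

record_gesture("circle", [[0.1, 0.9, 0.2], [0.2, 1.0, 0.1]])
record_gesture("shake", [[2.5, -2.0, 1.8], [-2.2, 2.4, -1.9]])
print(classify([[0.15, 0.95, 0.15], [0.2, 0.9, 0.2]]))
```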