Interactive free improvisation using time-domain extrapolation of textural features

Keywords

Non-idiomatic musical improvisation

How to Cite

SCHAUB, Stéphan. Interactive free improvisation using time-domain extrapolation of textural features. NICS Reports, Campinas, SP, v. 6, n. 18, p. 1–11, 2017. Available at: https://econtents.bc.unicamp.br/pas/index.php/nicsreports/article/view/163. Accessed: 3 Jul. 2024.

Abstract

This article presents an interactive system for non-idiomatic musical improvisation. The general approach assumes that a musician improvising in this context does not restrict the elements of his or her musical language to a fixed alphabet, avoids pre-established grammars, and concentrates on the articulation and continuity of the musical flow through the constant anticipation of future elements. Using audio classification techniques, we map each played phrase into a vector space representing textural features. The system then predicts possible continuations of the ongoing sequence and reinjects past segments according to (manually controlled) instructions to “contrast” with or “follow” the predictions. The system was tested with a professional saxophonist and proved to be a coherent and reactive improvisation environment, while also pointing to possible future extensions.
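The paper's implementation is not reproduced on this page, so the following is only a minimal Python sketch of the pipeline the abstract outlines: each phrase is mapped to a textural feature vector, the recent trajectory of those vectors is extrapolated in time, and a stored segment is reinjected either close to (“follow”) or far from (“contrast”) the prediction. All names and choices here (extract_features, predict_next, choose_segment; RMS, zero-crossing rate, spectral centroid as stand-in textural descriptors; polynomial extrapolation) are illustrative assumptions, not the authors' method.

    # Illustrative sketch only; descriptors and extrapolation are assumptions.
    import numpy as np

    def extract_features(phrase, sr=44100):
        # Map one played phrase (mono float array) to a textural feature
        # vector: RMS energy, zero-crossing rate, normalized spectral centroid.
        x = np.asarray(phrase, dtype=float)
        rms = np.sqrt(np.mean(x ** 2))
        zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2.0
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        return np.array([rms, zcr, centroid / (sr / 2.0)])

    def predict_next(trajectory, order=2):
        # Time-domain extrapolation: fit each feature dimension of the recent
        # phrase sequence with a low-order polynomial, evaluate one step ahead.
        traj = np.asarray(trajectory)
        t = np.arange(len(traj))
        deg = min(order, len(traj) - 1)
        return np.array([np.polyval(np.polyfit(t, traj[:, d], deg), len(traj))
                         for d in range(traj.shape[1])])

    def choose_segment(memory, prediction, mode="follow"):
        # Reinject the past segment whose features are nearest to ("follow")
        # or farthest from ("contrast") the predicted continuation.
        dists = [np.linalg.norm(feats - prediction) for feats, _ in memory]
        pick = int(np.argmin(dists) if mode == "follow" else np.argmax(dists))
        return memory[pick][1]  # the stored audio segment

In use, a performance loop would accumulate (feature vector, audio) pairs as phrases arrive, extrapolate over the last few entries, and call choose_segment with the manually selected mode; the choice of descriptors and of the extrapolation model are precisely the points such a sketch leaves open.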



This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2017 NICS Reports
