This paper presents Tidal-MerzA, a system that integrates affective modeling and reinforcement learning to generate musical patterns in the context of live coding. The system consists of two agents:
The first agent uses reinforcement learning to learn optimal weightings for the parameters of TidalCycles functions, such as loudness and pitch register, based on the target affective states defined by the ALCAA (Affective Live Coding Autonomous Agent) model.
The second agent generates mini-notation strings, which are a concise way to represent musical events in TidalCycles, by dynamically adjusting the importance of individual tokens based on the affective model. This agent handles the generation of rhythmic structure, mode, and pitch contour.
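To make the second agent's role concrete, the sketch below shows affect-conditioned sampling of mini-notation tokens. The token set, the arousal-based reweighting, and all function names are illustrative assumptions, not the paper's actual implementation; the point is only how token importance can be adjusted by a target affective state before sampling a valid mini-notation string.

```python
import random

# Hypothetical token vocabulary: drum events and a rest, as used in
# TidalCycles mini-notation. The weighting scheme below is a toy stand-in
# for the paper's affect-driven token importance adjustment.
TOKENS = ["bd", "sn", "hh", "~"]
BASE_WEIGHTS = {"bd": 1.0, "sn": 1.0, "hh": 1.0, "~": 1.0}

def affect_weights(arousal: float) -> dict:
    """Boost dense, loud tokens when arousal is high; favor rests when low."""
    w = dict(BASE_WEIGHTS)
    w["bd"] *= 1.0 + arousal   # more kick events under high arousal
    w["hh"] *= 1.0 + arousal   # busier hi-hats under high arousal
    w["~"] *= 2.0 - arousal    # more rests under low arousal
    return w

def generate_pattern(arousal: float, length: int = 8, seed: int = 0) -> str:
    """Sample a mini-notation string of `length` events, reweighted by affect."""
    rng = random.Random(seed)
    w = affect_weights(arousal)
    events = rng.choices(TOKENS, weights=[w[t] for t in TOKENS], k=length)
    return " ".join(events)    # e.g. "bd hh bd ~ hh sn bd hh"

print(generate_pattern(arousal=0.9))
```

A high-arousal call biases the sequence toward dense percussive tokens, while a low-arousal call yields sparser patterns with more rests; the output in either case remains a well-formed mini-notation sequence.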
The combination of these two agents allows Tidal-MerzA to generate musical patterns that are not only syntactically correct TidalCycles code but also capture the desired emotional qualities. The reinforcement learning approach enables the system to learn and adapt over time, improving its ability to align the generated music with the specified affective dynamics.
The paper outlines the design and implementation of the two agents, including the state and action spaces, reward functions, and learning algorithms. It also discusses the advantages and limitations of the Tidal-MerzA system, highlighting its potential for enhancing the adaptability and creative potential of live coding practices.
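As a rough illustration of the first agent's learning loop, the sketch below nudges two parameter weightings (loudness and pitch register) toward a target valence/arousal point using a negative-distance reward. The affect estimator, the hill-climbing update, and the parameter names are toy assumptions for illustration only, not the paper's reward function or learning algorithm.

```python
# Toy affect estimator: louder -> higher arousal, higher register -> higher
# valence. A deliberate simplification standing in for the ALCAA model.
def estimate_affect(loudness: float, register: float) -> tuple:
    return (register, loudness)  # (valence, arousal), both in [0, 1]

def reward(affect: tuple, target: tuple) -> float:
    """Negative squared distance to the target affective state."""
    return -((affect[0] - target[0]) ** 2 + (affect[1] - target[1]) ** 2)

def update(params: dict, target: tuple, step: float = 0.1) -> dict:
    """One hill-climbing step: try nudging each weighting up or down,
    keep the single change that most improves the reward."""
    best = dict(params)
    best_r = reward(estimate_affect(**params), target)
    for key in params:
        for delta in (-step, step):
            cand = dict(params)
            cand[key] = min(1.0, max(0.0, cand[key] + delta))
            r = reward(estimate_affect(**cand), target)
            if r > best_r:
                best, best_r = cand, r
    return best

params = {"loudness": 0.2, "register": 0.2}
target = (0.8, 0.8)  # desired (valence, arousal)
for _ in range(20):
    params = update(params, target)
# After enough iterations, both weightings settle near the target state.
```

This gradient-free update is far simpler than a full reinforcement learning setup, but it shows the shape of the loop the paper describes: act on the parameter weightings, score the resulting affect against the target, and adapt.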