



A Data Glove Based Sensor Interface to Expressively Control Musical Processes

Giovanni Saggio1, Franco Giannini1, Massimiliano Todisco1, Giovanni Costantini1,2
1 Dept. of Electronic Engineering, University of “Tor Vergata”, Via del Politecnico 1, 00133 Rome, Italy
2 Institute of Acoustics “O. M. Corbino”, Via del Fosso del Cavaliere 100, 00133 Rome, Italy
[email protected]

Abstract

Flexible sensors find many useful applications in detecting vibrations, contacts and impacts, air and liquid flows, pressures and compressions, displacements and motions, and are therefore used in fields such as robotics, medicine, fitness, assistive technology and gaming. Here we point out the adoption of such sensors to realize a data glove capable of associating a sound with each single movement of every joint of the fingers of a human hand. In addition, force sensing resistors applied to each fingertip measure the pressure applied on a surface when the hand mimics the gestures of a pianist. Previous works have already been devoted to playing sounds according to body movements but, as far as we know, none of them takes advantage of measuring the complete set of degrees of freedom of a human hand. To this end, we present an innovative data glove based sensor interface that allows an electronic music composer to plan and conduct the musical expressivity of a performer. By musical expressivity we mean all those execution techniques and modalities that a performer has to follow in order to satisfy common musical aesthetics, as well as the desiderata of the composer. The proposed interface, or virtual musical instrument, is able to transform input parameters supplied by hand movement into many sound synthesis parameters. In particular, we focus our attention on mapping strategies based on Neural Networks to solve the problem of musical expressivity.

Introduction

Traditional musical sound is a direct result of the interaction between a performer and a musical instrument, based on complex phenomena such as creativity, feeling, skill, muscular and nervous system actions, and movement of the limbs, all of them being the foundation of musical expressivity. In essence, musical instruments transduce the movements of a performer into sound. Moreover, they require two or more control inputs to generate a single sound. For example, the loudness of the sound can be controlled by means of a bow, a mouthpiece, or by plucking a string. The pitch is controlled separately, for example by means of fingering, which changes the length of an air column or of a string. The sound produced is characteristic of the musical instrument itself and depends on a multitude of time-varying physical quantities, such as the frequencies, amplitudes, and phases of its sinusoidal partials [1]. The way music is composed and performed changes dramatically [2] when, to control the synthesis parameters of a sound generator, we use input devices such as kinematic, optical or electromagnetic sensors, as well as human-computer interfaces such as mouse, keyboard, touch screen or gestural control interfaces [3,4,5,6,7].

As regards musical expressivity, it is important to define how to map a few input data onto a large number of synthesis parameters.

Materials and Methods

The bend sensors adopted here consist of plastic films printed with carbon inks (Fig. 1), whose electrical resistance increases the more they are bent. This is because stress micro-fractures, purposefully introduced into the material (Fig. 2), condition the sensor element to react to bending.

Figure 1: A bend sensor

Figure 2: SEM photo of the sensor’s surface

With a home-made set-up we fully analyzed and characterized these sensors for their electrical static and dynamic behavior. On the basis of the results previously reported [8], an electrical equivalent circuit was proposed, based on resistance, capacitance and inductance elements. Here, however, we are interested only in the variation of electrical resistance vs. bending angle since, for the aim of this work, the reactance properties of the bend sensor can be neglected with no meaningful drawbacks. Our measurement set-up therefore consisted of hinges, stepper motors with their power supplies, anti-vibration supports, and multimeters. Each hinge is made of a knuckle through which a central circular pin is passed, and two notched leaves extend laterally from the knuckle. One of the leaves is attached to the pin, so that it can revolve together with it, while the other one is kept fixed. A stepper motor, with its central axis joined to the pin, can rotate the revolving leaf, thus simulating the movements of a human joint (Fig. 3).

To take advantage of the demonstrated properties of these sensors, we inserted them into closed sleeves sewn on top of a Lycra glove, in correspondence with each finger joint, so as to obtain electrical resistance variations useful to provide the position of every joint of a human hand (Fig. 5). In this way the Distal Interphalangeal (DIP), Proximal Interphalangeal (PIP) and MetacarpoPhalangeal (MCP) finger joints could be tracked during movement. Four other sensors were placed between each pair of fingers to measure the abduction-adduction movements, and an accelerometer was used to measure the wrist movements. In addition, five force sensing resistors were inserted into the glove in correspondence with each fingertip, to measure the pressure with which the hand taps onto a rigid surface.

Figure 3: The hinge and stepper motor set-up

The sensors under investigation were placed, one at a time, lying on the hinge, so as to be bent according to the hinge movements. The results of our measurements are reported in the graph of Fig. 4. They are related to a bend sensor from Flexpoint Inc., 2 inches long, with a polyimide overlaminate. The graph shows that an un-flexed sensor has a nominal resistance of the order of 10 kΩ, but as the sensor is bent its resistance gradually increases, with an initial non-linear behavior that becomes approximately linear beyond a bending angle of 30 degrees. Since the maximum angle of flexion for a finger joint is 120° (in particular for the joint between the metacarpal and the proximal phalange), all measurements were performed from 0° (flat position of the sensor) to 120°, in steps of 10°, iterated 10 times, with 10 acquisitions of the resistance value for every step. The maximum resistance was found to be around 160 kΩ for a bending angle of 120°. The very low standard deviation (drawn superimposed in the figure at each 10° step) implies that these sensors can be successfully adopted to measure the bending of a human joint with an accuracy of the order of one degree of arc.

Figure 4: Measured resistance vs. bending angle of the bend sensor
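As an illustration of how such a characterization can be turned into a joint-angle estimate, the following Python sketch inverts a per-sensor calibration table (resistance recorded at each 10° step) by simple interpolation. Only the roughly 10 kΩ unflexed and 160 kΩ full-flexion endpoints come from the measurements above; the intermediate values below are placeholders, not the actual data of Fig. 4.

```python
import numpy as np

# Hypothetical per-sensor calibration table: bending angle (degrees) vs.
# measured resistance (ohms). Only the endpoints (~10 kOhm flat, ~160 kOhm
# at 120 degrees) are taken from the characterization above; intermediate
# values are illustrative and would come from the 0-120 degree sweep.
CAL_ANGLES_DEG = np.arange(0, 130, 10)
CAL_RESISTANCE_OHM = np.array(
    [10e3, 14e3, 20e3, 30e3, 42e3, 56e3, 71e3,
     87e3, 103e3, 118e3, 133e3, 147e3, 160e3])

def resistance_to_angle(r_ohm: float) -> float:
    """Estimate the joint bending angle from a measured sensor resistance.

    Uses piecewise-linear interpolation of the calibration table, which also
    captures the non-linear region below 30 degrees.
    """
    return float(np.interp(r_ohm, CAL_RESISTANCE_OHM, CAL_ANGLES_DEG))

if __name__ == "__main__":
    print(resistance_to_angle(10e3))   # ~0 degrees (flat sensor)
    print(resistance_to_angle(80e3))   # somewhere in the linear region
    print(resistance_to_angle(160e3))  # ~120 degrees (full flexion)
```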

Figure 5: Sensors and wires applied on a support glove

In this way all the degrees of freedom of a hand could be measured and converted into musical notes. The sensors were then used in conjunction with a voltage divider to provide changing voltages, which were furthermore electronically conditioned to lie between 0 and 5 V. The resistance values recorded from the sensors were thus converted into voltage signals and fed into an Arduino Mega board [9]. The Arduino Mega is a microcontroller board based on the ATmega1280 processor. It has 54 digital input/output pins (of which 14 can be used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button.
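As a rough sketch of this acquisition chain, the following Python fragment computes the output of a simple voltage divider over the 10-160 kΩ sensor range reported above, and the corresponding 10-bit ADC reading of the Arduino Mega. The 100 kΩ fixed resistor and the direct ADC connection are assumptions made only for illustration; the divider components and the conditioning circuit are not specified here.

```python
# Minimal sketch of the sensor acquisition chain described above.
# Assumptions (not specified in the paper): a 100 kOhm fixed resistor in the
# divider, a 5 V supply, and a direct connection to the Arduino's 10-bit ADC.

V_SUPPLY = 5.0          # volts
R_FIXED = 100e3         # assumed fixed resistor of the voltage divider (ohms)
ADC_LEVELS = 1024       # Arduino Mega ADC resolution (10 bits)

def divider_voltage(r_sensor_ohm: float) -> float:
    """Voltage across the fixed resistor with the bend sensor on top."""
    return V_SUPPLY * R_FIXED / (R_FIXED + r_sensor_ohm)

def adc_reading(voltage: float) -> int:
    """10-bit ADC count for a voltage in the 0-5 V range."""
    return min(ADC_LEVELS - 1, int(voltage / V_SUPPLY * ADC_LEVELS))

if __name__ == "__main__":
    for r in (10e3, 80e3, 160e3):   # unflexed, mid-range, fully flexed
        v = divider_voltage(r)
        print(f"R = {r/1e3:5.0f} kOhm -> {v:.2f} V -> ADC {adc_reading(v)}")
```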

Neural Network structure and Learning rule

An artificial Neural Network [10] is a mathematical model for information processing based on a connectionist approach to computation, inspired by the human brain. In a Neural Network model, simple nodes, called "neurons" or "units", are connected together to form a network of nodes. The strength of the connection between one neuron and another is given by a weight value. A typical Neural Network is arranged with three layers of neurons: an input, a hidden, and an output layer. In this context, we consider feed-forward architectures only, in which the information signal propagates from the input layer, through the intermediate or hidden layer, to the output layer, with no loops back to previous neurons. This Neural Network is known as a Multilayer Perceptron (MLP) or, after its learning algorithm, as a FeedForward Backpropagation Neural Network (FFBPNN). The FFBPNN's input and output layers represent the points of contact of the net with the external environment, while the hidden layer contributes to forming the non-linear relations existing between inputs and outputs.

A key property of a Neural Network is its ability to acquire knowledge from examples. Learning is an iterative process of adjustment applied to the synaptic weights of the network in response to an external stimulus. In particular, we will consider only Neural Networks trained by means of supervised learning: a training set, which contains both the input patterns and the corresponding desired outputs (or target patterns), is presented iteratively to the network with the aim of implementing a mapping that matches the training examples as closely as possible. Weights are iteratively modified through two passes, which together constitute an epoch of the backpropagation algorithm:

1. an input pattern is presented to the network input and propagated to the network output (forward pass); then the error E, defined as the squared difference between the desired and actual outputs, is calculated;
2. the error E is back-propagated (backward pass) and the weights are updated according to the gradient descent rule [10].

In essence, an FFBPNN works as a powerful trainable interpolation system that computes non-linear functions starting from desired input/output relationships. The complexity of the interpolating function, as well as the learning capability, grows with the number of neurons in the hidden layer. These structures are simple and effective, and have been exploited in a wide assortment of machine learning applications [10].
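To make the two passes concrete, here is a minimal Python/NumPy sketch of a one-hidden-layer FFBPNN trained by gradient descent on a squared-error loss. It is a generic illustration of the algorithm described above, not the implementation used in this work; the sigmoid activation, layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FFBPNN:
    """One-hidden-layer feed-forward network trained by backpropagation."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        """Forward pass: input -> hidden (sigmoid) -> output (linear)."""
        self.h = sigmoid(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_epoch(self, X, T, lr=0.05):
        """One epoch: forward pass, squared error, backward pass, update.

        The error for each example is E = 0.5 * ||y - t||^2, so its gradient
        with respect to the output is simply (y - t).
        """
        total_error = 0.0
        for x, t in zip(X, T):
            y = self.forward(x)
            err = y - t
            total_error += float(err @ err)
            # Backward pass: gradients of E with respect to the weights.
            grad_W2 = np.outer(self.h, err)
            grad_b2 = err
            delta_h = (err @ self.W2.T) * self.h * (1.0 - self.h)
            grad_W1 = np.outer(x, delta_h)
            grad_b1 = delta_h
            # Gradient-descent update.
            self.W2 -= lr * grad_W2
            self.b2 -= lr * grad_b2
            self.W1 -= lr * grad_W1
            self.b1 -= lr * grad_b1
        return total_error

if __name__ == "__main__":
    # Toy example: learn a mapping from 3 inputs to 2 outputs on 5 examples.
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(5, 3))
    T = np.column_stack([X.sum(axis=1), X[:, 0] * X[:, 1]])
    net = FFBPNN(n_in=3, n_hidden=8, n_out=2)
    for epoch in range(2000):
        e = net.train_epoch(X, T)
    print("final training error:", e)
```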

Sensor Interface and Mapping strategies

The data glove musical instrument that we propose consists of three components: the sensor unit, the mapping unit and the synthesizer unit. In particular, the signals supplied by the sensor unit, which realizes the interface between the performer and the system, do not directly influence the parameters that govern the behaviour of the sound generators; rather, they are pre-processed by the mapping algorithm. The structure of the data glove musical instrument is shown in Figure 6. To investigate the influence that mapping has on musical expression, let us consider some aspects of Information Theory and Perception Theory [11]:

• the quality of a message, in terms of the information it conveys, increases with its originality, that is with its unpredictability;
• information is not the same as the meaning it conveys: a maximum-information message makes no sense if there is no listener able to decode it.

A perceptual paradox [12] illustrating how an analytic model fails in predicting what we perceive from what our senses transduce is the following: both maximum predictability and maximum unpredictability imply minimum information, or even no information at all. Moreover, a musically expressive message is time-varying information which moves between maximum predictability and minimum predictability. Now, it is clear that the simple one-to-one mapping laws of traditional acoustic instruments leave room for a wide range of mapping strategies.

Musical Expressivity Implementation

The implementation of musical expressivity is accomplished once we define the correspondence between the n sensor outputs and the m synthesis parameters, that is to say, once we define the right mapping. Let us assume the following:

1. a predictable musical message is associated with an a priori known functional relation between the two hyperspaces ℝ^n and ℝ^m, that is to say, between the set of all the n sensor inputs and the set of all the m synthesis parameters;
2. an unpredictable musical message is associated with a non-linear and a priori unknown correspondence between ℝ^n and ℝ^m.

A composer can easily follow the above assumptions by making use of an FFBPNN trained as follows (a sketch of this workflow is given after the list):

1. he chooses the transducers to use, i.e. the n physical parameters to measure (in our case, acceleration and angular velocity);
2. by means of the sensor unit, the system acquires the n control parameters;
3. he fixes a point in the n-dimensional hyperspace and links it to a desired configuration of the m synthesis parameters;
4. he repeats step 3 D times, so as to have D n-to-m examples at his disposal; these constitute the training set for the mapping unit;
5. he chooses the Neural Network structure, that is, the number of hidden neurons to use, and then trains the Neural Network;
6. he explores the n-dimensional input hyperspace by moving through known and unknown points, with the aim of composing his piece of music.
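The following Python fragment sketches this workflow, assuming the FFBPNN class from the earlier sketch is in scope (for example, pasted into the same file). The dimensions (13 control inputs, 17 synthesis parameters, 20 examples, 20 hidden neurons) anticipate the real-time application described below; the control points and target configurations are random placeholders standing in for the composer's chosen values.

```python
import numpy as np
# Assumes the FFBPNN class from the earlier sketch is defined in this file.

n, m, D = 13, 17, 20          # control inputs, synthesis parameters, examples

# Steps 3-4: the composer fixes D points in the n-dimensional control
# hyperspace and links each of them to a desired configuration of the
# m synthesis parameters (placeholder random values here).
rng = np.random.default_rng(42)
control_points = rng.uniform(size=(D, n))       # n sensor/control values each
synth_targets = rng.uniform(size=(D, m))        # m synthesis parameters each

# Step 5: choose the network structure and train it on the D examples.
mapping_net = FFBPNN(n_in=n, n_hidden=20, n_out=m)
for epoch in range(5000):
    mapping_net.train_epoch(control_points, synth_targets, lr=0.05)

# Step 6: explore the input hyperspace; any new control vector (e.g. a live
# glove reading) is mapped to a full set of synthesis parameters.
live_reading = rng.uniform(size=n)
synthesis_parameters = mapping_net.forward(live_reading)
print(synthesis_parameters)
```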

Figure 6: Data Glove musical instrument

Control of Musical Processes: a Real-time Application

As an application of our data glove, we have developed a real-time musical performance. The synthesis process was realized by means of the sound synthesizer “Textures 2.1” [13], developed as a standard VST [14] (Virtual Studio Technology, from Steinberg) audio plug-in. The musical performance has been developed using the Max/MSP [15] environment. The signals supplied by the glove sensor unit realize the interface between the performer and the system. In particular, we consider ten finger movements, corresponding to the finger sensors, and three signals supplied by three mono-axial accelerometers. The synthesized sound of “Textures 2.1” is based on a granular additive synthesis algorithm. There are seventeen sound synthesis parameters [13], corresponding to sliders and knobs, through which we can shape the sound waveform. Therefore, we have thirteen control inputs with which to operate on the seventeen parameters that influence the sound produced by the synthesizer. We chose twenty reference points in the thirteen-dimensional hyperspace of the finger movements and hand accelerations. Then we trained an FFBPNN with twenty neurons in the hidden layer and explored the thirteen-dimensional hyperspace. Finally, we chose, amongst many others, the movements the fingers and the hand have to repeat in order to reproduce the interesting sounds discovered during the exploration.

Conclusions

We have developed a glove based sensor interface for composing and performing expressive musical sound. We direct our attention to common musical aesthetics as a determinant factor in musical expressivity. The sensor interface we have presented is arranged around a sensor unit that supplies kinematic physical parameters. In particular, these parameters are motion acceleration and finger movements, processed by an FFBPNN-based mapping unit that is able to provide suitable relationships between physical and sound synthesis parameters. The experience gained by working with our sensor interface has shown that the mapping strategy is a key element in providing musical sounds with expressivity. We have therefore defined twenty composition rules so that a musician can easily compose his own piece of music with our glove based sensor interface. Finally, a musical composition was implemented, in which the finger and hand movements of a performer were turned into expressive musical sound.

References

1. Neville H. Fletcher, Thomas D. Rossing, The Physics of Musical Instruments, Springer, 2nd edition (July 22, 2005).
2. Curtis Roads, The Computer Music Tutorial, The MIT Press (February 27, 1996).

3. Bongers, B. 2000. "Physical Interfaces in the Electronic Arts. Interaction Theory and Interfacing Techniques for Real-time Performance." In M. Wanderley and M. Battier, eds., Trends in Gestural Control of Music. Ircam - Centre Pompidou.
4. Orio, N. 1999. "A Model for Human-Computer Interaction Based on the Recognition of Musical Gestures." Proceedings of the 1999 IEEE International Conference on Systems, Man and Cybernetics, pp. 333-338.
5. Jeong S.M., Song T.H., Jeong H.U., Kim M.J., Kwon K.H., Jeon J.W., "Game control using multiple sensors," MoMM '09: Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia, Kuala Lumpur (Malaysia), 14-16 Dec. 2009.
6. Headlee K., Koziupa T., Siwiak D., "Sonic Virtual Reality Game: How Does Your Body Sound?," NIME 2010, June 15-18, 2010, Sydney, Australia.
7. Morales-Manzanares R., Morales E.F., Dannenberg R., "SICIB: An Interactive Music Composition System Using Body Movements," Computer Music Journal, Volume 25, Number 2, Summer 2001, pp. 25-36.
8. Orengo G., Saggio G., Bocchetti S., Giannini F., "Advanced characterization of piezoresistive sensors for human body movement tracking," Nano-bio circuits fabrics and systems, ISCAS 2010, May 30th - June 2nd, Paris (France), pp. 1181-1184.
9. Arduino, documentation available on the web at http://arduino.cc/
10. Hertz J., A. Krogh & R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Company, Reading, Massachusetts, 1991.
11. Abraham Moles, Information Theory and Aesthetic Perception, University of Illinois Press (1969).
12. Rudolf Arnheim, Entropy and Art: An Essay on Disorder and Order, University of California Press (January 29, 1974).
13. Giorgio Nottoli, "A sound texture synthesizer based on algorithmic generation of micro-polyphonies," Proc. of the 3rd International Conference "Understanding and Creating Music," Caserta, December 2003, 11-15.
14. Steinberg VST Audio Plug-Ins SDK, 3rd party developer support site at http://www.steinberg.net/324_1.html
15. Cycling74 Max/MSP, documentation available on the web at http://www.cycling74.com/products/maxmsp


