MSc Interactive Multimedia Susan Knabe 01097503T

Report on final MSc dissertation

 

Background

As a musician, the area I want to work in for this dissertation is music synthesis. 
I am very interested in new internet technologies and what they allow people to do, 
as well as in artificial intelligence and in making the interfaces between humans and 
their senses either unnecessary or at least more usable and flexible. 
My dissertation will combine all of these areas. I am planning to design and build 
an application that will allow users to draw using different mouse positions, colours, 
settings and shapes. Whatever they draw will then be translated into parameters for 
algorithms that synthesise sounds. By defining a specific set of rules and making these 
known to the user, I will enable them to create music just by drawing with a mouse. I 
want to place this application on the internet so that everybody who is interested in 
trying it out has access to it. 
It is my belief that human creativity is limited, amongst other things, by the need to 
translate our thoughts and emotions into other media in order to capture them or to 
communicate them to others. When musicians hear or "feel" music in their mind and want 
to capture it, they are always faced with a lack of proper means to do so. If they 
think of a complete piece of music, it is not enough to sing the melody and record it, 
because that would lose the complexity and the atmospheric properties of the music; 
nor is it sufficient to write down the musical score, as it would not represent the 
overall and individual sounds and their attributes. Eventually, by the time they get 
into a recording studio, search for the right sounds to represent the product of their 
mind and deal with all the technical issues, they may have forgotten what their music 
felt and sounded like, and the piece of work is lost. 
By providing a greater number of communication channels between our different 
"outlets" such as the voice and the hands, i.e. the organs and body parts through 
which we communicate with other people, I believe we move a step closer to breaking 
down the barriers that limit our power to create art. The ultimate goal would be to 
produce music directly from our thoughts, e.g. by measuring and translating our brain 
waves. To some extent, this is possible already. The concept of BGM (Brain-Generated 
Music) is to place a device on the user's head that reads their brain waves and 
computes music accordingly, playing it back to the user in real time. The user is thus 
listening to music that is indirectly produced by their own brainwaves. It has been 
reported that the experience has an extremely calming effect and can even 
significantly decrease blood pressure, so that, used regularly, it can be therapeutic. 
However, if the same user later listens to a recording of the BGM produced during a 
session, they no longer experience the music the way they did while creating it: the 
effect occurs only while the music and the brainwaves share the same rhythm and 
intensity. BGM therefore does not yet make it possible to record music directly from 
a person's thoughts, although it may be a step in that direction.


Why the internet?

People spend more and more time on the internet these days, and it seems that the 
net will quite soon be one of the main means of communication and of handling daily 
life. It is already common to make reservations, purchase tickets, order books and 
look for the best holidays online. Many people even order their food and their 
shopping over the internet, and young people spend hours online looking for music, 
games and other activities. Given all those developments since the nineties, making 
my application accessible online simply seems logical. 
If the prototype were to be simply on a CD, for instance, the number of people who 
might try it out would be limited to the number of people I would personally give the 
CD to. If it is online, however, it can be accessed by anybody looking for anything 
related to it. Furthermore, the application I am planning to make as part of my 
dissertation is a fairly basic one that can be extended and tailored to suit specific 
needs. If I had the working prototype online on my own website, I could try to secure 
some freelance work by contacting other websites that have music-related content. For 
instance, a popular night club might benefit from having a similar application on 
their website, allowing clubbers who frequent the venue to draw their own tunes. The 
interface and the sounds could be adjusted to suit that particular user group by 
making the application fit the dance scene, and the initial contact could be made 
simply by e-mailing the club my URL.


Technology

The application I intend to make aims to build a bridge between our visual perception 
and our ears by transforming graphical data into sounds. The technology I am going to 
use for this work is Java, for several reasons: Java is an extremely flexible language 
that is already extensively and increasingly used both for complex, fully automated 
back-end computations in business and finance and for multimedia applications such as 
games, animations and sound. Furthermore, it is compatible across different platforms 
and systems, and it is free. I am hoping to work on multimedia with Java after this 
course, so this dissertation is good preparation and an interesting challenge in that 
respect. Although I have not yet had the time to do a lot of research on this topic, 
I know that Java has the tools needed to synthesise sounds in real time. Areas I will 
study include the Java Media Framework, JavaSound, JSound and Java 3D, along with 
other technologies that my research points me to, such as pulsar synthesis. One 
website that holds links to several music applications created with Java is:

http://shoko.calarts.edu/~tre/JavaMusic.html
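
As a quick check that this is feasible, the following is the kind of minimal test I 
could run with the standard JavaSound (javax.sound.sampled) API: it synthesises one 
second of a 440 Hz sine tone in memory and plays it straight back, with no 
intermediate audio file. The class name and the exact figures are illustrative 
assumptions, not part of the prototype's design.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Quick feasibility test (illustration only): synthesise one second of a
// 440 Hz sine tone in memory and play it back through JavaSound, with no
// intermediate audio file.
public class SineTest {
    public static void main(String[] args) throws LineUnavailableException {
        float sampleRate = 44100f;
        // 16-bit, mono, signed, big-endian PCM
        AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, true);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
                new DataLine.Info(SourceDataLine.class, format));
        line.open(format);
        line.start();

        int samples = (int) sampleRate;          // one second of audio
        byte[] buffer = new byte[2 * samples];   // two bytes per 16-bit sample
        for (int i = 0; i < samples; i++) {
            double value = Math.sin(2 * Math.PI * 440.0 * i / sampleRate);
            short s = (short) (value * 0.5 * Short.MAX_VALUE);
            buffer[2 * i] = (byte) (s >> 8);     // high byte first (big-endian)
            buffer[2 * i + 1] = (byte) s;
        }
        line.write(buffer, 0, buffer.length);    // streamed to the sound card
        line.drain();
        line.close();
    }
}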
Another challenge is making this application run online without the user having to 
download any plug-ins; I will therefore try to make all of the prototype's 
functionality available as a Java applet that executes on the user's computer. 
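A bare-bones sketch of such an applet class is shown below; the class name is a 
placeholder of my own. Provided the visitor's browser has a Java virtual machine, an 
applet like this can be embedded in an ordinary HTML page with an <applet> tag and 
run without any separate download.

import java.applet.Applet;
import java.awt.Graphics;

// Bare-bones placeholder for the kind of applet class I have in mind.
// Embedded in a page with an <applet> tag, it runs in the browser's own
// Java virtual machine, so no separate plug-in download should be needed.
public class DrawSynthApplet extends Applet {
    public void init() {
        // drawing panel, mouse listeners and the sound engine would be set up here
    }

    public void paint(Graphics g) {
        g.drawString("Draw here to make music", 20, 20);
    }
}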


Research

My research will focus on three main areas: Java technology, synthesis tools, and 
music applications that are currently available. Looking at the latter will help me 
make decisions about the functionality of the applet. I need to be clear about many 
aspects of that functionality. For example, will I let users draw on a static panel 
and provide a "Play" button so they can listen back to what they have created whenever 
they choose, or will the music loop continuously over a specified number of seconds, 
with the user simply changing their drawing at any time? This decision might affect 
the complexity of the pieces produced: I presume that with the first option, the 
static panel, users will put more effort in and pay more attention to detail, whereas 
with the continuous loop they might try things out more randomly, as looping makes 
attention to detail seem less important.  
	Furthermore, how many and which options for changing sounds can I provide? I 
could make the vertical position of a drawn object (its height along the y-axis of 
the panel) determine the pitch, while its horizontal position, i.e. how far to the 
right or left it lies, affects the brightness of the sound or the choice of 
instrument. The user's choice of colour could control different sound filters, for 
instance, to add effects to the sounds. Another option is a 3D object such as a cube 
in the corner of the panel that, when moved, acts as a pitch-bend control for the part 
of the drawing that is currently selected, or alters its filters.
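The sketch below illustrates the simplest of these mappings using the built-in 
JavaSound MIDI synthesizer: the vertical position of a mouse click chooses the pitch 
and the horizontal position chooses the instrument. The note range, the window size 
and the class name are assumptions made purely for illustration, not decisions about 
the final design.

import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiUnavailableException;
import javax.sound.midi.Synthesizer;
import javax.swing.JFrame;
import javax.swing.JPanel;

// Rough sketch of one possible mapping (not the final rule set): the vertical
// click position chooses the pitch, the horizontal position chooses the
// instrument, and the note is played on the built-in JavaSound MIDI synthesizer.
public class MappingSketch {
    public static void main(String[] args) throws MidiUnavailableException {
        final Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        final MidiChannel channel = synth.getChannels()[0];

        final JPanel panel = new JPanel();
        panel.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                int height = panel.getHeight();
                int width = panel.getWidth();
                // Top of the panel = high notes, bottom = low notes (roughly MIDI 36-96).
                int note = 36 + (int) ((height - e.getY()) / (double) height * 60);
                // Left to right selects one of the 128 General MIDI programs.
                int program = (int) (e.getX() / (double) width * 127);
                channel.programChange(program);
                channel.noteOn(note, 90);
            }
            public void mouseReleased(MouseEvent e) {
                channel.allNotesOff();
            }
        });

        JFrame frame = new JFrame("Mapping sketch");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(panel);
        frame.setSize(600, 400);
        frame.setVisible(true);
    }
}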
Can I provide a pool of instruments or sounds to choose from? This could be very 
useful if I were to develop a similar applet as freelance work for clients with 
particular requirements. I also need to find out how to let users add beats and rhythm 
to their piece, possibly by overlaying an optional grid onto the background so that 
they can place events in the positions that produce a regularly recurring sound (a 
rough sketch of such a grid helper follows this paragraph). Is it possible to build in 
an option for saving users' work and giving them access to it when they return to 
alter it on their next visit to the site? I need to study the limitations of using a 
Java applet in this way. The other part of studying the functionality is deciding on 
the design and layout of the interface, e.g. how many options for altering and 
producing sound I can include, and whether to present them as buttons, 3D objects, 
menus, etc.
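The small helper class below is a hypothetical sketch of how such a grid could work: 
a drawn event is snapped to the nearest grid column, and each column is then treated 
as a fixed time step, so that events placed on the grid recur at a regular pulse. All 
of the names and numbers are assumptions for illustration only.

// Hypothetical helper for the optional rhythm grid: the horizontal position of
// a drawn event is snapped to the nearest grid column, and each column is then
// interpreted as a fixed fraction of a bar (e.g. a sixteenth note).
public class GridQuantizer {
    private final int columnWidth;       // width of one grid cell in pixels
    private final double secondsPerCell; // e.g. a sixteenth note at 120 bpm = 0.125 s

    public GridQuantizer(int columnWidth, double secondsPerCell) {
        this.columnWidth = columnWidth;
        this.secondsPerCell = secondsPerCell;
    }

    // Snap a raw mouse x coordinate to the nearest column boundary.
    public int snapX(int x) {
        return Math.round((float) x / columnWidth) * columnWidth;
    }

    // Convert a snapped x coordinate into the time at which the event sounds.
    public double startTimeSeconds(int snappedX) {
        return (snappedX / columnWidth) * secondsPerCell;
    }
}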
	The research into Java technology will inform me about the synthesis tools that 
are available and how they can be used and optimised. For example, with Csound, a 
music synthesis tool, one has to write and save two separate files, an .orc file and 
a .sco file, both containing code in a particular syntax. Csound then renders those 
files and saves the result as a .wav audio file, which the user has to browse to and 
play back. Csound is therefore not suitable for this work, as I need a tool that will 
synthesise and play back sounds in real time and allow alterations while the sound is 
playing. I need to explore which methods are available and how well they do or do not 
suit my needs in order to decide which one to use. 
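To make that requirement concrete, the sketch below shows the kind of behaviour I am 
after, written against nothing but the standard javax.sound.sampled API: audio is 
generated in small blocks inside a loop, and the frequency field can be changed from 
another thread (for example a mouse listener) while the tone is sounding, so the 
alteration is heard almost immediately. The class and method names are placeholders 
of my own.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;

// Sketch of the real-time behaviour I need (not based on any existing tool):
// audio is generated in small blocks, and the frequency field can be changed
// from another thread while the tone is sounding, so the alteration is heard
// almost immediately.
public class LiveTone implements Runnable {
    private volatile double frequency = 440.0;   // altered on the fly by the GUI
    private volatile boolean running = true;

    public void setFrequency(double hz) { frequency = hz; }
    public void stop() { running = false; }

    public void run() {
        try {
            float rate = 44100f;
            AudioFormat format = new AudioFormat(rate, 16, 1, true, true);
            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(
                    new DataLine.Info(SourceDataLine.class, format));
            line.open(format, 4096);             // small buffer keeps latency low
            line.start();

            byte[] block = new byte[1024];       // roughly 12 ms of audio per block
            double phase = 0.0;
            while (running) {
                double step = 2 * Math.PI * frequency / rate;  // re-read every block
                for (int i = 0; i < block.length; i += 2) {
                    short s = (short) (Math.sin(phase) * 0.4 * Short.MAX_VALUE);
                    block[i] = (byte) (s >> 8);
                    block[i + 1] = (byte) s;
                    phase += step;
                }
                line.write(block, 0, block.length);
            }
            line.drain();
            line.close();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}

A user interface could create a LiveTone, start it on its own thread and then call 
setFrequency() whenever the relevant part of the drawing changes.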


Please e-mail comments to: SusanKnabe@hotmail.com