Welcome to Arthur Asuncion's Digital Signal Processing page. This site has been created for ICS 180, which is taught by Professor Christopher Dobrian.


March 21: Final Project Deliverables: Signal Processing Applications of Wavelets & AWE Java Program

My final research paper is posted as a PDF file. Here is my zipped-up code (source files only, uncompiled). To compile the source files, use Java 1.5, since I developed A.W.E. (Art's Wavelet Effects) in Java 1.5. If you would like the Java executables only, I have zipped up the Java class files as well for convenience. Again, use Java 1.5 to run these class files to ensure compatibility.

My code set includes 10 classes:

Here is a brief overview of functionality:

To run A.W.E., simply type "java SignalControl" at the command prompt (make sure you are in the directory of the compiled classes). Enjoy!

Update [March 21, 1:21 PM]: There is a small correction to the Sound.java file. This correction allows the correct visualization to be shown on Canvas 2 (the coefficients, rather than the reconstructed signal) when playing stereo files. Note that this doesn't affect the program at all when it is playing mono files.

Update [March 23, 6:19 PM]: Here are the slides that I presented today in class. If you have any questions about my program or about wavelets in general, feel free to email me. I enjoyed all the presentations today.


February 27: Project Update: Signal Processing Applications of Wavelets

My project focuses on wavelets and their applications to digital signal processing. I have started to write a rough draft of the research paper. This rough draft includes the abstract, references, and text which describes the Haar wavelet transform. I still have to write about the different applications of wavelets. Also, I am planning to write about the Daubechies wavelet transform if I have time.

I have implemented many different functions relating to the Haar wavelet in Java:

The Java source and class files are in Wavelet.zip. To test the Haar-related functionality, extract Wavelet.zip, open a command-line interface, navigate to the directory containing the extracted files (such as Haar.class), and type java Haar. The program will print an 8-sample signal along with its Haar wavelet transform.
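
As an aside, here is a minimal sketch of the averaging-and-differencing idea behind the Haar transform (an illustration only, not my actual Haar.java; it uses the common unnormalized average/difference convention):

    // HaarSketch.java -- minimal 1-D Haar wavelet transform.
    public class HaarSketch {

        // In-place Haar transform of a signal whose length is a power of two.
        // Each pass replaces the working region with pairwise averages (low
        // frequencies) followed by pairwise differences (high frequencies),
        // then recurses on the averages.
        public static void forward(double[] s) {
            for (int len = s.length; len >= 2; len /= 2) {
                double[] tmp = new double[len];
                for (int i = 0; i < len / 2; i++) {
                    tmp[i]           = (s[2 * i] + s[2 * i + 1]) / 2.0; // average
                    tmp[len / 2 + i] = (s[2 * i] - s[2 * i + 1]) / 2.0; // detail
                }
                System.arraycopy(tmp, 0, s, 0, len);
            }
        }

        public static void main(String[] args) {
            double[] signal = { 5, 1, 2, 8, 6, 4, 3, 7 }; // an 8-sample test signal
            forward(signal);
            for (double c : signal) System.out.print(c + " ");
            System.out.println();
        }
    }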

I have also created an application that computes the Haar wavelet transform and inverse transform on a sound signal in real time and plays the sound on the computer speakers. Simply type java SignalControl in the same command-line interface. A user interface will be shown. Click on the Load button and select a 16-bit WAV file (this is the only format that is supported). After loading the WAV file, press the Start button. The program will then transform the signal into wavelet coefficients, process these coefficients, and then transform the coefficients back to the normal time-amplitude format before playing the sound. If the WAV file is 1 channel (Mono), a compression simulation effect will be heard -- the signal will be compressed by a factor of 16. If the WAV file is 2 channels (Stereo), the low-frequency coefficients will be thresholded and only the noisy part (the high frequencies) will be heard.

For now, I have just hardcoded this functionality into the program. Later on, I plan to allow the manipulation of these effects through the user interface. Also, I have to implement the wavelet coefficient display; right now, the third frame of the user interface is blank.
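
To give a flavor of what that hardcoded coefficient processing might look like, here is an illustrative sketch (not the actual SignalControl code; it assumes the coefficient layout from the Haar sketch above, with the low-frequency averages at the front, and it reads "compressed by a factor of 16" as keeping only the largest 1/16th of the coefficients):

    import java.util.Arrays;

    // CoefficientEffects.java -- sketches of the two hardcoded effects.
    public class CoefficientEffects {

        // Mono effect: keep only the largest-magnitude 1/keepOneIn of the
        // coefficients and zero the rest (call with keepOneIn = 16).
        static void compress(double[] c, int keepOneIn) {
            double[] mags = new double[c.length];
            for (int i = 0; i < c.length; i++) mags[i] = Math.abs(c[i]);
            Arrays.sort(mags);                              // ascending
            int keep = Math.max(1, c.length / keepOneIn);   // how many survive
            double threshold = mags[c.length - keep];
            for (int i = 0; i < c.length; i++)
                if (Math.abs(c[i]) < threshold) c[i] = 0.0;
        }

        // Stereo effect: zero the low-frequency coefficients so that only
        // the noisy high frequencies survive reconstruction.
        static void removeLows(double[] c, int lowCount) {
            for (int i = 0; i < lowCount; i++) c[i] = 0.0;
        }

        public static void main(String[] args) {
            double[] a = { 4.5, 0.5, 1.0, -2.0, 2.0, -3.0, 1.0, -2.0 };
            compress(a, 16);        // only the largest coefficient survives
            System.out.println(Arrays.toString(a));
        }
    }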

In the zip file, I have also included two sample WAV files. LonelyUniverseIntro.wav is a stereo WAV file, while testReasonLoop.wav is a mono WAV file.

Deliverables:


February 14: Max/MSP Assignment on Sound Spatialization

I have created a patch that simulates the effect of Doppler shift. A moving sound source travels from left to right at a constant speed, while the listener stands a certain distance away from the source's line of travel. The primary sound is a siren that I generated using a technique similar to Doppler shift. One can also use sound files or the audio-in signal to test this Doppler shift effect.

I used the Pythagorean theorem to calculate the distances between the source and each of the listener's ears (with each ear getting its own calculation). I then used these distances, along with the speed of sound, to calculate the delay time for each ear. Since the distances are continuously changing, the delay is also continuously changing, and this gives rise to the Doppler shift. The sound becomes transposed to a higher pitch when the source is travelling towards the listener and to a lower pitch when the source is leaving the listener.
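
Here is a small numeric sketch of that calculation, written in Java rather than Max/MSP (the ear spacing, listener distance, and 343 m/s speed of sound are illustrative assumptions):

    // DopplerGeometry.java -- per-ear delay for a source moving along the x-axis.
    public class DopplerGeometry {
        static final double SPEED_OF_SOUND = 343.0; // m/s at room temperature
        static final double EAR_SPACING    = 0.15;  // metres between the ears

        // Delay (seconds) from a source at (x, 0) to an ear at (earX, d),
        // using the Pythagorean theorem for the straight-line distance.
        static double delaySeconds(double x, double earX, double d) {
            double dx = x - earX;
            return Math.sqrt(dx * dx + d * d) / SPEED_OF_SOUND;
        }

        public static void main(String[] args) {
            double d = 10.0; // listener stands 10 m from the line of travel
            for (double x = -50; x <= 50; x += 25) {
                double left  = delaySeconds(x, -EAR_SPACING / 2, d);
                double right = delaySeconds(x, +EAR_SPACING / 2, d);
                System.out.printf("x=%6.1f m  left=%.5f s  right=%.5f s%n",
                                  x, left, right);
            }
        }
    }

As the source approaches, each ear's delay shrinks from sample to sample, which is exactly what raises the perceived pitch.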

I used the varying delay time as my primary technique for sound spatialization. Another technique I used was continuously varying amplitude. One assumption that I made was that a siren sound can be heard within 500 meters. I made the amplitude inversely proportional to the distance between the source and the listener. Since the difference in distance between source-left ear and source-right ear was too minuscule to contribute to a noticeable amplitude change between the left and right ear, I also scaled the amplitude again for panning reasons (so that the ear that is closer to the source would notice a higher amplitude than the ear that is further away). This additional amplitude scaling is justifiable, since in real life, the closer ear has a better 'view' of the sound signal, while the farther ear is potentially being blocked by the listener's head! Thus, the closer ear should notice a considerably higher amplitude, especially in this specific left-to-right simulation.
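
The two gain stages could be sketched like this (the 500-meter cutoff is my assumption from above; the linear pan law and the numbers in main are invented for illustration):

    // AmplitudeSketch.java -- the two amplitude stages described above.
    public class AmplitudeSketch {

        // Stage 1: overall gain inversely proportional to distance, going
        // silent beyond the assumed 500 m audibility limit for a siren.
        static double distanceGain(double meters) {
            if (meters >= 500.0) return 0.0;
            return Math.min(1.0, 1.0 / meters);  // 1/distance law, capped at 1
        }

        // Stage 2: extra panning gain so the nearer ear hears a considerably
        // higher amplitude. pan runs from -1 (hard left) to +1 (hard right).
        static double panGain(double pan, boolean leftEar) {
            return leftEar ? (1.0 - pan) / 2.0 : (1.0 + pan) / 2.0;
        }

        public static void main(String[] args) {
            double dist = 50.0, pan = 0.6;  // source 50 m away, off to the right
            System.out.printf("left gain %.4f, right gain %.4f%n",
                    distanceGain(dist) * panGain(pan, true),
                    distanceGain(dist) * panGain(pan, false));
        }
    }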

I also used a lowpass filter with a cutoff frequency that becomes lower as the sound source recedes from the listener. I used the lowpass filter since I read in the book (and heard in class) that high frequencies cannot be heard as well as low frequencies at greater distances. However, I am not sure that the lowpass filter contributes anything significant to my patch (maybe my parameters are not fine-tuned enough).
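
For completeness, here is a sketch of that filtering stage (again Java standing in for MSP; the cutoff-versus-distance mapping is an arbitrary guess, which may be exactly the tuning problem I mention):

    // DistanceLowpass.java -- a one-pole lowpass whose cutoff falls with distance.
    public class DistanceLowpass {
        private double y = 0.0;          // previous output sample
        private final double sampleRate;

        DistanceLowpass(double sampleRate) { this.sampleRate = sampleRate; }

        // Standard one-pole lowpass: y[n] = (1 - a) * x[n] + a * y[n-1].
        double process(double x, double cutoffHz) {
            double a = Math.exp(-2.0 * Math.PI * cutoffHz / sampleRate);
            y = (1.0 - a) * x + a * y;
            return y;
        }

        // Arbitrary mapping: ~10 kHz bandwidth up close, sliding down to
        // ~1 kHz at the 500 m limit, so highs fade as the source recedes.
        static double cutoffForDistance(double meters) {
            double t = Math.min(meters / 500.0, 1.0);
            return 10000.0 * (1.0 - t) + 1000.0 * t;
        }

        public static void main(String[] args) {
            DistanceLowpass lp = new DistanceLowpass(44100.0);
            double[] impulse = { 1, 0, 0, 0, 0 };
            for (double x : impulse)
                System.out.printf("%.5f ", lp.process(x, cutoffForDistance(400)));
            System.out.println();
        }
    }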

Here is my patch as a text file (with many more comments and detailed user notes):


February 2: Proposal for Final Research Project

There are several topics that I am currently thinking of pursuing: wavelets, pitch detection, or probabilistic composition.

Wavelets are used in spectrum analysis as an alternative to the FFT (Fast Fourier Transform). I would like to research the advantages and disadvantages of using wavelet transforms instead of the FFT. Furthermore, I would like to implement both the WT (wavelet transform) and the FFT to compare the two approaches. I think that wavelets can be implemented in Max/MSP or in a standard programming language like C or Java. Curtis Roads gives a good overview of wavelets in Ch. 13 of his Computer Music textbook. I have also done some initial browsing of websites devoted to wavelets, and I have noticed that the topic seems to require a lot of mathematical theory, so I am not yet sure of the feasibility of this proposal. Therefore, I am also considering other topics for my final research project.

Pitch detection is a more manageable topic that I might pursue. Curtis Roads gives a good treatment of this subject in Ch. 12 of his textbook. I would eventually like to implement a program that converts a WAV file into a MIDI file. Handling multiple notes (polyphony) might be difficult. However, I have thought of a non-realtime recursive approach to detecting multiple notes. Here is the algorithm (which has probably been invented already): 1) Find the most prominent frequency in the sound; this is your first note. 2) Subtract this frequency from the original sound. Then repeat steps 1 and 2 for several iterations to recover all of the notes in the sound.
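
Here is a toy, non-realtime sketch of that loop (a naive DFT peak-picker; the two test tones are placed exactly on DFT bins so the subtraction in step 2 is clean, which would not be true of real recordings):

    // PitchPeel.java -- find the strongest frequency, subtract it, repeat.
    public class PitchPeel {

        // Naive DFT over bins 1..n/2; returns {bestBin, Re, Im} at the peak.
        static double[] strongestBin(double[] x) {
            int n = x.length, bestK = 1;
            double bestMag = -1, bestRe = 0, bestIm = 0;
            for (int k = 1; k <= n / 2; k++) {
                double re = 0, im = 0;
                for (int t = 0; t < n; t++) {
                    double th = 2 * Math.PI * k * t / n;
                    re += x[t] * Math.cos(th);
                    im -= x[t] * Math.sin(th);
                }
                double mag = Math.hypot(re, im);
                if (mag > bestMag) { bestMag = mag; bestK = k; bestRe = re; bestIm = im; }
            }
            return new double[] { bestK, bestRe, bestIm };
        }

        public static void main(String[] args) {
            int n = 512;
            double sr = 8000.0;
            double f1 = 32 * sr / n, f2 = 48 * sr / n;   // 500 Hz and 750 Hz
            double[] x = new double[n];
            for (int t = 0; t < n; t++)                  // two-note test signal
                x[t] = Math.sin(2 * Math.PI * f1 * t / sr)
                     + 0.5 * Math.sin(2 * Math.PI * f2 * t / sr);

            for (int note = 1; note <= 2; note++) {
                double[] b = strongestBin(x);        // step 1: strongest frequency
                int k = (int) b[0];
                System.out.printf("note %d: %.1f Hz%n", note, k * sr / n);
                for (int t = 0; t < n; t++) {        // step 2: subtract it
                    double th = 2 * Math.PI * k * t / n;
                    x[t] -= (2.0 / n) * (b[1] * Math.cos(th) - b[2] * Math.sin(th));
                }
            }
        }
    }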

The last topic that interests me is algorithmic composition using probabilistic models. Curtis Roads treats this subject in Ch. 18. I am familiar with Markov models and various probability distributions, and I think that it would be very interesting to create a program that intelligently makes music. I am not sure if this topic is within the scope of the class, since there seems to be no real digital signal processing happening within the algorithmic composition of music. If I do choose this topic, perhaps I could extend my G.I.G. Java applet to be truly intelligent: a Markov model could give the probability of each transition between chords. I must note that I am not too strong on music theory, so I would need to research the probability distributions of chord transitions and progressions.
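
As a tiny sketch of what such a model could look like (the chord set and transition probabilities below are invented for illustration; these are precisely the numbers I would need to research):

    import java.util.Random;

    // MarkovChords.java -- a first-order Markov chain over chord symbols.
    public class MarkovChords {
        static final String[] CHORDS = { "I", "IV", "V" };
        // P[i][j] = probability of moving from chord i to chord j (rows sum to 1).
        static final double[][] P = {
            { 0.2, 0.4, 0.4 },  // from I
            { 0.3, 0.2, 0.5 },  // from IV
            { 0.7, 0.1, 0.2 },  // from V
        };

        public static void main(String[] args) {
            Random rng = new Random();
            int state = 0;                        // start on I
            for (int step = 0; step < 8; step++) {
                System.out.print(CHORDS[state] + " ");
                double r = rng.nextDouble(), cum = 0;
                for (int j = 0; j < P[state].length; j++) {
                    cum += P[state][j];
                    if (r < cum) { state = j; break; }
                }
            }
            System.out.println();
        }
    }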

If I have enough time, I would like to integrate all of these topics into one cohesive framework. Perhaps wavelets could be used to produce a more accurate pitch detection tool, and then this pitch detection tool could influence an artificially intelligent algorithmic composition system in real time. However, a combination of these topics does not seem like a feasible project to do within 6 weeks. Hence, I will most likely have to narrow my project to just one choice.


Assignment for Wednesday, January 19:

For this assignment, I have developed a C program called AMP, which stands for Art's Multi-effects Processor. Not only is AMP a real-time digital signal processor with three included effects (distortion, tremolo, and delay), but it is also an extensible framework that can be used to quickly create a new digital effect. AMP also dynamically facilitates both the creation of effects chains and the modification of effect parameters in real time.

AMP supports three basic operations: adding an effect to the effects chain, removing an effect from the effects chain, and modifying the parameters for each effect. In order to dynamically add and remove effects from the effects chain, AMP stores effect function pointers in an array and selects (from this array) the desired effect function(s) to run, based on the user's input. Browse the amp.c code for more implementation details.
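
In Java terms, the same idea looks like the sketch below (amp.c uses an array of C function pointers; here an array of references to a small effect interface plays the same role, with stand-in effects):

    // EffectChain.java -- the function-pointer-array idea from amp.c, in Java.
    public class EffectChain {
        interface Effect { float process(float sample); }

        public static void main(String[] args) {
            Effect distortion = new Effect() {    // hard clipper (stand-in)
                public float process(float s) {
                    return Math.max(-0.3f, Math.min(0.3f, s)) / 0.3f;
                }
            };
            Effect attenuate = new Effect() {     // stand-in for tremolo/delay
                public float process(float s) { return 0.5f * s; }
            };

            // The "array of effect pointers": reordering it reorders the chain.
            Effect[] chain = { distortion, attenuate };

            float[] signal = { 0.1f, 0.5f, -0.8f, 0.2f };
            for (int i = 0; i < signal.length; i++)
                for (Effect e : chain)            // run stages in chain order
                    signal[i] = e.process(signal[i]);

            for (float s : signal) System.out.print(s + " ");
            System.out.println();
        }
    }

Swapping the two entries in chain reproduces the ordering experiment described below.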

Different orderings of the effects in the effects chain can be utilized to process the signal in different ways. For instance, an effects chain configuration like Input --> tremolo --> delay --> Output would produce a combined effect that is different from a configuration like Input --> delay --> tremolo --> Output. Sometimes the differences are very subtle.

AMP requires the standard PortAudio files. To build AMP on Windows, one also needs to link against winmm.lib.




Last Modified March 21, 2005