(test with this video: http://youtu.be/4G60hM1W_mk)
The idea is to get the frequency of the volume peak (the loudest bin in the FFT spectrum) and map it to the nearest note name.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

String note;                            // name of the note
int n;                                  // integer value of the midi note
color c;                                // color used to display the note
float hertz;                            // frequency in hertz
float midi;                             // float midi note
int noteNumber;                         // variable for the midi note
int sampleRate = 44100;                 // sample rate of 44100
float[] max = new float[sampleRate/2];  // half the sample rate, because the FFT only covers frequencies up to sampleRate/2; this array is filled with amplitude values
float maximum;                          // the maximum amplitude in the max array
float frequency;                        // the detected frequency in hertz

void setup() {
  size(400, 200);
  minim = new Minim(this);
  minim.debugOn();
  in = minim.getLineIn(Minim.MONO, 4096, sampleRate);
  fft = new FFT(in.left.size(), sampleRate);
}

void draw() {
  background(0);                        // black background
  findNote();                           // find-note function
  textSize(50);                         // size of the text
  text(frequency - 6 + " hz", 50, 80);  // display the frequency in hertz
  pushStyle();
  fill(c);
  text("note " + note, 50, 150);        // display the note name
  popStyle();
}

void findNote() {
  fft.forward(in.left);
  for (int f = 0; f < sampleRate/2; f++) {   // check the amplitude of each frequency between 0 and 22050 hertz
    max[f] = fft.getFreq(float(f));          // each index corresponds to a frequency and holds its amplitude
  }
  maximum = max(max);                        // the maximum value of the max array, i.e. the volume peak
  for (int i = 0; i < max.length; i++) {     // compare each frequency with the volume peak
    if (max[i] == maximum) {                 // if the amplitude equals the peak, the index is the frequency
      frequency = i;
    }
  }
  midi = 69 + 12*(log((frequency-6)/440));   // formula that transforms a frequency into a midi number
  n = int(midi);                             // cast to int
  // an octave has 12 semitones, so modulo 12 gives the note name independently of the octave
  if (n%12 == 9)  { note = "a";  c = color(255, 0, 0);   }
  if (n%12 == 10) { note = "a#"; c = color(255, 0, 80);  }
  if (n%12 == 11) { note = "b";  c = color(255, 0, 150); }
  if (n%12 == 0)  { note = "c";  c = color(200, 0, 255); }
  if (n%12 == 1)  { note = "c#"; c = color(100, 0, 255); }
  if (n%12 == 2)  { note = "d";  c = color(0, 0, 255);   }
  if (n%12 == 3)  { note = "d#"; c = color(0, 50, 255);  }
  if (n%12 == 4)  { note = "e";  c = color(0, 150, 255); }
  if (n%12 == 5)  { note = "f";  c = color(0, 255, 255); }
  if (n%12 == 6)  { note = "f#"; c = color(0, 255, 0);   }
  if (n%12 == 7)  { note = "g";  c = color(255, 255, 0); }
  if (n%12 == 8)  { note = "g#"; c = color(255, 150, 0); }
}

void stop() {
  // always close Minim audio classes when you are done with them
  in.close();
  minim.stop();
  super.stop();
}
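For reference, here is a minimal sketch of the same peak search written with Minim's band accessors instead of per-hertz getFreq() lookups. It assumes the fft and in objects declared above; indexToFreq() converts the loudest band's index back to hertz.

// A sketch of the same peak search using Minim's FFT bands.
// Assumes the fft and in objects from the sketch above.
float findPeakFrequency() {
  fft.forward(in.left);
  int peakBand = 0;
  for (int i = 1; i < fft.specSize(); i++) {
    if (fft.getBand(i) > fft.getBand(peakBand)) {
      peakBand = i;                      // remember the band with the largest amplitude
    }
  }
  return fft.indexToFreq(peakBand);      // center frequency of that band, in hertz
}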
When I play an audio clip on my computer and check it with tuning apps on my phone, the frequencies correspond to the numbers listed here: http://www.phys.unsw.edu.au/jw/notes.html, but when I do the same test with this app, the numbers are wildly different. Can you help me understand why?
As I answered on Twitter, I used the formula from that site: m = 12*log2(fm/440 Hz) + 69. The algorithm I wrote is midi = 69+12*(log((frequency-6)/440)). As you can see, I subtracted 6 from the frequency. That -6 appeared when I tested my algorithm against a reliable reference pitch of 440 Hz (note A) and the value returned was 446, so I calibrated the system by hand. The algorithm worked fine on two completely different computers (mine and a friend's), so I assumed this +6 offset was common to all microphones. Maybe I was wrong.
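For anyone following along, here is a small sketch of that formula in Processing; note that Processing's log() is the natural logarithm, so log base 2 has to be written as log(x)/log(2). The offsetHz parameter is just an illustrative name for the calibration term discussed above.

// m = 12*log2(f/440) + 69, with the calibration term kept separate.
// Processing's log() is the natural logarithm, so log2(x) = log(x)/log(2).
float frequencyToMidi(float freqHz, float offsetHz) {
  float f = freqHz - offsetHz;           // offsetHz = 6 reproduces the post's -6; 0 disables it
  return 69 + 12 * (log(f / 440) / log(2));
}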
What you can try is to remove the -6, test the system against a reliable reference pitch such as a guitar tuner, and then subtract whatever offset you actually measure.
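A rough sketch of one way to do that: store a measured offset instead of the hard-coded -6. It assumes the frequency variable from the sketch above, and the 'c' key binding is only for illustration.

float calibrationOffset = 0;             // replaces the hard-coded -6

void keyPressed() {
  if (key == 'c') {                      // press 'c' while a trusted 440 Hz reference tone plays
    calibrationOffset = frequency - 440; // e.g. 446 - 440 = 6 on the author's setup
  }
}

// then use frequency - calibrationOffset wherever the sketch currently uses frequency - 6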
This helped me a lot with my own project - thank you for sharing.