Here I perform the same task, i.e. a single note (A2), this time from a cello. This is a little more difficult, as the cello is a polyphonic instrument with very strong harmonics.

From the signal and FFT result we can see that the first overtone has more energy than the fundamental frequency.

This problem is attenuated in the Multi-rate FFT spectrogram, as the overtone is sampled over less time by the shorter, higher-frequency FFT.
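
The Analyzer's multi-rate implementation isn't reproduced here, but a minimal sketch of the idea in Python, a long FFT window for the low band and a shorter one for the high band, might look like this (the window lengths and the 1 kHz split are illustrative assumptions, not the actual settings):

```python
import numpy as np

def multirate_spectrum(signal, sample_rate,
                       long_window=16384, short_window=2048, split_hz=1000.0):
    """Illustrative multi-rate analysis: the long window gives fine frequency
    resolution for the fundamental region, while the short window covers the
    overtone region over less time. All parameters are assumptions."""
    # Low band: long window, high frequency resolution.
    seg = signal[:long_window]
    lo = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    lo_freqs = np.fft.rfftfreq(len(seg), 1.0 / sample_rate)

    # High band: short window, so the overtone is sampled over less time.
    seg = signal[:short_window]
    hi = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    hi_freqs = np.fft.rfftfreq(len(seg), 1.0 / sample_rate)

    low_band = [(f, m) for f, m in zip(lo_freqs, lo) if f < split_hz]
    high_band = [(f, m) for f, m in zip(hi_freqs, hi) if f >= split_hz]
    return low_band + high_band
```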

We use the built-in profile for the cello.

Here we can see that, as the note fades on the last beat, the overtone is also transcribed.

Since we know that this was a single note, the monophonic check box should be checked.

Now the result is as expected.

Using the built-in Convolutional Neural Network for the cello, we see leading and trailing silence; only the red line from the spectrogram has been transcribed.

The threshold for silence is calculated differently for the algorithm and the CNN. The algorithm sets the threshold on the fly during transcription, based on a running average of the energy in the audio. For the CNN, the threshold for silence is applied when the spectrogram slices are prepared for training; the CNN was never trained on the parts of the note that were too quiet.
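
A minimal sketch of the running-average approach, assuming frames of samples and a threshold set at a fixed fraction of the average energy heard so far (the 10% factor is an assumption, not the Analyzer's actual setting):

```python
import numpy as np

def silence_mask(frames, scale=0.1):
    """Mark a frame as silent when its energy falls below a fraction of the
    running average energy of the audio so far. 'frames' is a list of numpy
    sample arrays; 'scale' is an illustrative value."""
    energies = np.array([np.mean(f.astype(float) ** 2) for f in frames])
    running_avg = np.cumsum(energies) / np.arange(1, len(energies) + 1)
    return energies < scale * running_avg
```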

The simplest music file to transcribe is a single note. Here I use a bassoon playing a single note A (octave 2) to walk through the simplest transcription.

From the signal and FFT result we can see that this is indeed a single note with a single dominant frequency.
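
To make "single dominant frequency" concrete, here is a hedged sketch that picks the strongest FFT bin and names the nearest note relative to A2 at 110 Hz (plain numpy; not the Analyzer's code):

```python
import numpy as np

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def dominant_note(signal, sample_rate):
    """Return (peak_frequency, note_name) for the strongest FFT bin.
    For the bassoon A2 example the peak should land near 110 Hz."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]            # skip the DC bin
    semitones = int(round(12 * np.log2(peak / 110.0)))   # semitones above A2
    return peak, NOTE_NAMES[semitones % 12]
```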

The spectrogram confirms the simplicity of this example.

To transcribe this note we will use the built-in bassoon profile and the default options (i.e. correlation).

The result is as expected.

Alternatively, we could have elected to use the built-in Convolutional Neural Network for the bassoon.

The result is a little different. The note ends just after the fourth beat. The CNN transcribes this as a 3-beat note, not a 4-beat note.

A spectrogram tab has been added to the Audiophile’s Analyzer to give a visual representation of the spectrum of the whole audio file (https://en.wikipedia.org/wiki/Spectrogram).

Spectrogram – 3 octaves x 37 seconds

The spectrogram has time on the horizontal axis and frequency on the vertical axis. Two options are available for the frequency axis: raw FFT results, and mapped notes (a simple approximation to a constant-Q transform). Here the mapped notes are shown.
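
The mapping itself is simple to sketch: collapse the linear FFT bins into one value per semitone, which is only a rough stand-in for a true constant-Q transform. The note range below (A1 to A4, three octaves, matching the caption) and the A4 = 440 Hz reference are assumptions:

```python
import numpy as np

def map_bins_to_notes(magnitudes, sample_rate, fft_size,
                      low_midi=33, high_midi=69):
    """Collapse linear FFT bins into one value per semitone (MIDI 33..69,
    i.e. A1..A4). Each note takes the strongest bin inside its half-semitone
    band, a crude approximation to a constant-Q frequency axis."""
    freqs = np.fft.rfftfreq(fft_size, 1.0 / sample_rate)
    notes = []
    for midi in range(low_midi, high_midi + 1):
        centre = 440.0 * 2 ** ((midi - 69) / 12)              # note frequency
        lo, hi = centre * 2 ** (-0.5 / 12), centre * 2 ** (0.5 / 12)
        band = magnitudes[(freqs >= lo) & (freqs < hi)]
        notes.append(band.max() if band.size else 0.0)
    return np.array(notes)
```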

This functionality is a precursor to applying a convolutional neural network (CNN) to the transcription process. This technique is currently available commercially (lunaverus.com), but for piano only, and the results are on a par with the algorithmic methods implemented so far. Working in two dimensions has advantages, in that attack and decay patterns can be trained for. By allowing the user to apply algorithmic transcription, AI, or both in concert, better accuracy can hopefully be achieved. By training models for various instruments, and by providing the user with the means to train their own models, this approach can be applied more broadly.

Version 1.2, now released, supports both MIDI file output from a score transcribed from audio and MIDI file input for analysis and transcription.

To illustrate the MIDI functionality simply, I will use a single note played on a bassoon.

Using the metronome this transcribes as:

Saving to MIDI we get these events:
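
As a rough illustration of what those events amount to, here is a sketch that writes a comparable single-note MIDI file using the mido package (the tempo, program number, note length and velocity are assumptions, not the Analyzer's actual output):

```python
import mido  # pip install mido

# A single transcribed note boils down to a note_on / note_off pair,
# with timing expressed in ticks. All values here are illustrative.
mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(120)))
track.append(mido.Message('program_change', program=70))               # GM bassoon
track.append(mido.Message('note_on', note=45, velocity=64, time=0))    # A2
track.append(mido.Message('note_off', note=45, velocity=64, time=4 * 480))
mid.save('single_note.mid')
```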

Opening the saved MIDI file we see:

Scoring produces:

Note the missing rest at the end. The MIDI standard has no way of specifically defining a rest. The Audiophile’s Analyzer will infer rests that occur between notes, but this final rest cannot be inferred.
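
A minimal sketch of how rests between notes can be inferred from the note events, again using mido (this is an illustration, not the Analyzer's implementation). A gap between a note_off and the next note_on becomes a rest; nothing follows the final note_off, so no trailing rest can be produced:

```python
import mido  # pip install mido

def infer_rests(path):
    """Return a list of (start_beat, length_in_beats) rests found between
    notes in a MIDI file. Rests after the last note cannot be inferred."""
    mid = mido.MidiFile(path)
    ticks_per_beat = mid.ticks_per_beat
    rests, tick, last_note_off = [], 0, None
    for msg in mido.merge_tracks(mid.tracks):
        tick += msg.time
        if msg.type == 'note_on' and msg.velocity > 0:
            if last_note_off is not None and tick > last_note_off:
                rests.append((last_note_off / ticks_per_beat,
                              (tick - last_note_off) / ticks_per_beat))
            last_note_off = None
        elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
            last_note_off = tick
    return rests
```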

Now that the Audiophile’s Analyzer is available in the Microsoft App Store, I need to profile as many instruments as possible to increase the accuracy and utility of the application.

While the profiling tool provided with the Audiophile’s Analyzer allows the user to create their own profiles, it makes more sense for me to create them and incorporate them into future releases.

The tuba cannot be transcribed without a profile as the overtones dominate its spectrum.

The profile created from this and other samples:

Without applying the profile, the highest point in the spectrum is selected (we already know that the tuba is monophonic). This is C#, which is incorrect.

Applying the profile using correlation, we correctly identify the low A.
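
The exact correlation the Analyzer performs isn't shown here, but the idea can be sketched as scoring each candidate fundamental by how well the instrument profile, placed at that fundamental's harmonics, lines up with the measured spectrum (the profile values and candidates in the example are made up for illustration):

```python
import numpy as np

def best_note_by_profile(spectrum, freqs, profile, candidates):
    """Pick the candidate fundamental whose harmonic template, weighted by
    the instrument profile, correlates best with the magnitude spectrum.
    'spectrum' is a float magnitude array, 'profile' the relative energy of
    each harmonic (1st, 2nd, 3rd, ...)."""
    scores = {}
    for f0 in candidates:
        template = np.zeros(len(spectrum))
        for k, weight in enumerate(profile, start=1):
            template[np.argmin(np.abs(freqs - k * f0))] = weight
        scores[f0] = float(np.dot(spectrum, template))
    return max(scores, key=scores.get)

# e.g. a tuba-like profile where the overtones outweigh the fundamental:
# best_note_by_profile(spectrum, freqs, profile=[0.2, 1.0, 0.8, 0.5],
#                      candidates=[55.0, 69.3, 82.4])   # A1, C#2, E2
```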

The Imaging Whiteboard is done. I’ve added every feature I can think of, fixed every bug I know about. 3.0.5 is the final version.

So, what is the next project?

Obviously, I asked ChatGPT.

Me:

“My name is Steve Weavers; my personal web site is http://sound-analysis.com/ . I am responsible for The Musicians Workbench and The Imaging Whiteboard. I have just completed the Imaging Whiteboard. I am looking for ideas for my next project. What do you think I should tackle?”

Here is what it said:

“Hi Steve! It’s great to hear that you’ve completed the Imaging Whiteboard. As for your next project, there are many directions you could go in. Here are a few ideas to consider:

Music notation software: If you enjoy working on music-related projects, creating a software tool that makes it easier for musicians to notate their compositions could be a valuable resource for the community.

Audio editing software: Building an audio editing software could be a complex but rewarding project. You could focus on a specific niche, such as podcast editing, or create a comprehensive tool that covers all aspects of audio editing.

Virtual instrument plugin: If you have experience with music production software, developing a virtual instrument plugin could be a fun challenge. This could involve creating realistic-sounding emulations of real instruments, or coming up with entirely new and unique sounds.

Music recommendation engine: With so much music available online, it can be overwhelming for listeners to discover new artists and albums. Creating a recommendation engine that uses machine learning algorithms to suggest music based on a user’s listening history could be a valuable tool for music lovers.

Audio analysis tool: Given your background in sound analysis, creating a tool that helps users analyze audio files could be a natural fit. This could involve developing algorithms to identify key features of the audio, such as tempo, key, and tonality.

I hope these ideas give you some inspiration for your next project!”

Me:

“Those were excellent suggestions. Do you have any ideas for imaging projects?”

“Certainly, here are some ideas for imaging projects you could consider:

Image recognition software: Developing software that uses machine learning algorithms to recognize and identify objects within images could be a valuable tool for a range of industries, from healthcare to security.

Image processing software: Creating software that enhances or alters images could be a fun and creative project. You could focus on specific effects, such as color correction or noise reduction, or create a comprehensive tool that covers a range of editing capabilities.

Virtual reality experiences: With the increasing popularity of virtual reality technology, creating immersive visual experiences could be a rewarding project. This could involve creating 360-degree images or videos, or developing interactive VR games.

Image compression algorithms: Developing algorithms to compress image files without compromising their quality could be a valuable resource for a range of industries, from web development to photography.

Augmented reality applications: Building applications that overlay virtual objects or information onto real-world images could be a fun and innovative project. This could involve creating educational tools or marketing applications for businesses.

I hope these ideas provide some inspiration for your next imaging project!”

That’s brilliant! The Audio analysis tool might be a good challenge. What do you think?

Using the MNIST database of handwritten digits (MNIST database – Wikipedia), a convolutional neural network was trained to an accuracy of 90%. This took 50 epochs.
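
The CNN Configuration tool does all of this graphically; as a rough code equivalent, a small convolutional network for 28x28 digit images can be set up and trained in a few lines of PyTorch (the layer sizes, optimiser and batch size are assumptions, not the configuration used in this demo):

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Small CNN for 28x28 grayscale digits; the architecture is illustrative.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),
)

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):          # the demo above used 50 epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# The saved weights play the role of the model loaded into the Image Classifier.
torch.save(model.state_dict(), "mnist_cnn.pt")
```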

The trained model was loaded into the Image Classifier control and used to identify handwritten digits.

The files required to reproduce this demo are available here https://drive.google.com/file/d/1XKSYvJfAW1maNsaiV0iaWZXor0Tbtuat/view?usp=share_link

This release will include AI functionality, including a new tool which will allow the user to design a Convolutional Neural Network, to train and test this network, and to save the network at any stage.

There will be a new control, an Image Classifier, that will use trained models to classify images. There are also enhancements to existing controls to support the preparation of training data and the use of the classifier in the whiteboard.

The user should have a broad understanding of Convolutional Neural Network structures but, unlike with other scripting tools, is not required to understand the mathematics that underpins this technology. The user is not required to write any code or script. Every part of the process, from preparing the training data to deploying the network, is performed graphically using the Imaging Whiteboard and the CNN Configuration tool.

I am currently in the final stages of testing and documentation.

Here is a screenshot of the CNN Configurator taken during training.

The new blob counter control in the Imaging Whiteboard (2.5.7) can be used for more advanced image analysis algorithms.

Here we see an image of M&Ms, and we want to know how many blue ones are visible. The threshold control is used to separate the blue component of the image. The morphology controls are used to filter out spurious noise and partially visible M&Ms. The blob counter will identify the blobs and allow the user to select the blobs of interest. The selected blob count is the answer.
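
The same pipeline can be sketched with OpenCV as a point of comparison (the HSV thresholds, kernel size and minimum blob area are assumptions tuned by eye, not the Imaging Whiteboard's settings, and "mms.jpg" is a placeholder file name):

```python
import cv2

# Threshold the blue component, clean up with morphology, count the blobs.
image = cv2.imread("mms.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Keep only strongly blue pixels (the HSV bounds are illustrative guesses).
mask = cv2.inRange(hsv, (100, 120, 60), (130, 255, 255))

# Opening removes spurious noise, closing fills small holes in the blobs.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Count connected components, skipping the background and tiny fragments
# (which stand in for partially visible M&Ms).
count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
blue_mms = sum(1 for s in stats[1:] if s[cv2.CC_STAT_AREA] > 500)
print(f"Visible blue M&Ms: {blue_mms}")
```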

Version 2.5.7 includes new image analysis controls, including a corner detector. This control implements the Harris corner detector algorithm, described here: Harris corner detector – Wikipedia.

Here we can see the traditional test image Lenna with significant image features identified.
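
For reference, the same detector is available in OpenCV, so a minimal sketch of marking Harris corners on an image looks like this (the block size, aperture and k value are illustrative defaults, not the Imaging Whiteboard's settings):

```python
import cv2
import numpy as np

# Minimal Harris corner detection; parameter values are illustrative.
image = cv2.imread("lenna.png")
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Mark pixels whose corner response exceeds 1% of the maximum in red.
image[response > 0.01 * response.max()] = (0, 0, 255)
cv2.imwrite("lenna_corners.png", image)
```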