Using the MNIST database of handwritten digits (MNIST database – Wikipedia), a convolutional neural network was trained to an accuracy of 90%. This took 50 epochs.
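For readers who want to experiment with something similar outside the Imaging Whiteboard, here is a minimal PyTorch sketch of training a small CNN on MNIST. The architecture, optimizer, and learning rate are assumptions chosen for illustration, not the network used in this demo.

```python
# Minimal sketch of training a small CNN on MNIST with PyTorch.
# The layer sizes and optimizer are illustrative assumptions only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
    nn.Flatten(), nn.Linear(16 * 7 * 7, 10),                     # 10 digit classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                     # the demo trained for 50 epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "mnist_cnn.pt")  # keep the trained model
```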

The trained model was loaded into the Image Classifier control and used to identify handwritten digits.

The files required to reproduce this demo are available here: https://drive.google.com/file/d/1XKSYvJfAW1maNsaiV0iaWZXor0Tbtuat/view?usp=share_link

This release will include AI functionality, including a new tool that allows the user to design a Convolutional Neural Network, to train and test the network, and to save it at any stage.

There will be a new control, an Image Classifier, that uses trained models to classify images. There are also enhancements to existing controls to support preparing training data and using the classifier in the whiteboard.

The user should have a broad understanding of Convolutional Neural Network structures but, unlike with other scripting tools, is not required to understand the mathematics that underpins this technology. The user is not required to write any code or script. Every part of the process, from preparing the training data to deploying the network, is performed graphically using the Imaging Whiteboard and the CNN Configuration tool.

I am currently in the final stages of testing and documentation.

Here is a screenshot of the CNN Configurator taken during training.

The new blob counter control in the Imaging Whiteboard (2.5.7) can be used to build more advanced image analysis algorithms.

Here we see an image of M&Ms, and we want to know how many blue ones are visible. The threshold control is used to separate the blue component of the image. The morphology controls are used to filter out spurious noise and partially visible M&Ms. The blob counter identifies the blobs and allows the user to select the blobs of interest. The selected blob count is the answer.
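For anyone who wants to try the same pipeline outside the whiteboard, a rough OpenCV sketch follows. The HSV range for "blue", the kernel size, and the minimum blob area are assumed values that would need tuning to the actual image.

```python
# Rough sketch of the threshold -> morphology -> blob count pipeline.
# The HSV range, kernel size, and area threshold are assumed values.
import cv2
import numpy as np

image = cv2.imread("mms.jpg")                      # hypothetical file name
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Threshold: keep only the blue pixels.
mask = cv2.inRange(hsv, (100, 120, 50), (130, 255, 255))

# Morphology: opening removes spurious noise.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Blob counting: connected components, filtered by a minimum area
# to drop partially visible M&Ms.
count, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
blobs = [i for i in range(1, count) if stats[i, cv2.CC_STAT_AREA] > 500]
print(f"Blue M&Ms visible: {len(blobs)}")
```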

Version 2.5.7 includes new image analysis controls, among them a corner detector. This control implements the Harris corner detector algorithm, described here: Harris corner detector – Wikipedia.

Here we can see the traditional test image Lenna with significant image features identified.
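OpenCV ships an implementation of the same algorithm, so the result is easy to reproduce in a few lines. The block size, Sobel aperture, and response threshold below are illustrative values, not the whiteboard's settings.

```python
# Harris corner detection; parameter values are illustrative only.
import cv2
import numpy as np

image = cv2.imread("lenna.png")
gray = np.float32(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

# blockSize = neighbourhood size, ksize = Sobel aperture, k = Harris parameter.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Mark pixels whose response exceeds 1% of the maximum as corners (red).
image[response > 0.01 * response.max()] = (0, 0, 255)
cv2.imwrite("lenna_corners.png", image)
```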

A new Blob Counter control has been added to the Imaging Whiteboard. This control will allow the user to identify and count blobs within an image.

A live image will be displayed, with the total number of blobs updated dynamically.

Freezing the image will allow the user to select individual blobs, which will be identified by outline and ID in the displayed image.
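A rough sketch of that outline-and-ID display, assuming a binary input image and using OpenCV contours (the threshold and font settings are arbitrary):

```python
# Outline each blob and label it with an ID on a frozen frame.
# Assumes a roughly binary image; threshold and font values are arbitrary.
import cv2

frame = cv2.imread("frozen_frame.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
display = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)
for blob_id, contour in enumerate(contours, start=1):
    x, y, _, _ = cv2.boundingRect(contour)
    cv2.drawContours(display, [contour], -1, (0, 255, 0), 2)   # outline
    cv2.putText(display, str(blob_id), (x, y - 5),             # ID label
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)

print(f"Total blobs: {len(contours)}")
cv2.imwrite("blobs_labelled.png", display)
```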

The Game of Life algorithm is described here: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life

This control will allow the Game of Life to be run on an input image or test pattern. This is an example of emergence: https://theconversation.com/emergence-the-remarkable-simplicity-of-complexity-30973

The following sequence shows successive iterations gaining in complexity. The first iteration, where live cells exist on the edges of the seed image, is predictable; subsequent iterations are not predictable (although they are reproducible). This sequence will run for more than 2000 iterations before becoming stable.
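The rule itself is tiny. A NumPy sketch of the update step, seeded with a chequerboard like the sequence below, might look like this; the wrap-around edge handling is an assumption, and the whiteboard's edge behaviour may differ.

```python
# One Game of Life step on a binary grid (wrap-around edges assumed).
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    # Sum the eight neighbours by rolling the grid in every direction.
    neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # A cell is alive next step if it has exactly 3 neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

# Chequerboard seed with 8-pixel squares (sizes are arbitrary).
rows = np.arange(64)[:, None] // 8
cols = np.arange(64)[None, :] // 8
grid = ((rows + cols) % 2).astype(np.uint8)

for _ in range(50):
    grid = life_step(grid)
```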

Chequerboard used to seed Game of Life
Iteration 1
Iteration 10
Iteration 50

Here we can see the results of two methods, applied to the same image, shown on the monitor simultaneously. The split screen feature will be available in version 2.5 of the Imaging Whiteboard.

Noise is added to the image, and the Set Memory control writes the noisy image to the secondary memory. The temporal filter is applied to the primary memory. Swap Memory switches the primary and secondary memories, and the 3×3 median filter is applied. The monitor shows the primary image (median filter result) on the left and the secondary image (temporal filter result) on the right.
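The same comparison can be scripted directly. Below is a minimal sketch of the two branches, assuming a stack of independently noisy frames; the noise level, frame count, and file names are arbitrary.

```python
# Median filter vs. temporal (averaging) filter on the same noisy source,
# shown split-screen. Noise level, frame count, and file names are arbitrary.
import cv2
import numpy as np

clean = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
frames = [np.clip(clean + np.random.normal(0, 25, clean.shape), 0, 255)
          for _ in range(8)]                        # independently noisy frames

temporal = np.mean(frames, axis=0).astype(np.uint8)       # average over time
median = cv2.medianBlur(frames[-1].astype(np.uint8), 3)   # 3x3 spatial median

half = clean.shape[1] // 2
split = np.hstack([median[:, :half],      # median result on the left
                   temporal[:, half:]])   # temporal result on the right
cv2.imwrite("split_screen.png", split)
```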

White noise contains all frequencies. By applying filters to white noise and viewing the resulting spectrum, the effect of each filter can be seen. Here we see the test signal generator producing white noise on two channels and the resulting spectrum. The high pass filter is applied to the signal, and the resulting spectrum, with the low frequencies eliminated, is shown. The low pass filter is then applied, eliminating the high frequencies.
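The same experiment can be run numerically. In the sketch below, the Butterworth design and the cutoff frequency are arbitrary illustrative choices, not the whiteboard's filter implementation.

```python
# White noise through high pass and low pass filters, comparing spectra.
# Filter order and cutoff are arbitrary; the whiteboard may differ.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
noise = rng.standard_normal(8192)        # white noise: roughly flat spectrum

b_hp, a_hp = signal.butter(4, 0.2, btype="highpass")   # normalised cutoff
b_lp, a_lp = signal.butter(4, 0.2, btype="lowpass")

high_passed = signal.lfilter(b_hp, a_hp, noise)        # low frequencies removed
low_passed = signal.lfilter(b_lp, a_lp, high_passed)   # high frequencies removed

for name, x in (("white", noise), ("high passed", high_passed),
                ("both filters", low_passed)):
    spectrum = np.abs(np.fft.rfft(x))
    print(f"{name}: energy = {np.sum(spectrum**2):.1f}, "
          f"peak bin = {spectrum.argmax()}")
```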

Tracking a target in a raw image may not provide the best tracking performance in all cases. Often tracking a target in a processed image is better. The pre-processing may be edge detection such as a Sobel filter (shown here), setting a threshold, noise filtering, etc.

Using the Set Memory control, the original image is saved to memory. The pre-processing (convolution) is performed on the main image. The target is tracked in the processed image, but the crop that is passed to the next control is extracted from the image in memory (if it exists). The type of pre-processing will vary depending on the video used.
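A sketch of the same idea, with simple template matching standing in for the tracker; the file names, the template, and the Sobel pre-processing parameters are hypothetical.

```python
# Track in a Sobel-filtered frame, but crop from the saved original image.
# Template matching stands in for the tracker; names/values are hypothetical.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
memory = frame.copy()                        # Set Memory: keep the raw image

# Pre-processing: Sobel edge magnitude (one of several possible choices).
gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

template = cv2.imread("target_edges.png", cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(edges, template, cv2.TM_CCOEFF_NORMED)
_, _, _, (x, y) = cv2.minMaxLoc(result)      # best match in processed image

h, w = template.shape
crop = memory[y:y + h, x:x + w]              # crop taken from the raw image
cv2.imwrite("tracked_crop.png", crop)
```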

The latest version (2.3.2) released on this web site (not in the MS store yet) has a new “Low Frequencies Center” option added to the FFT filter control. This option is more convenient for common FFT filters.

Low pass FFT filter

By including the low frequencies in the center of the FFT mask window, a low pass filter can be implemented.

High pass FFT filter

By excluding the same low frequencies, a high pass filter can be implemented.
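With the low frequencies shifted to the center, both masks reduce to a simple circular region. A NumPy sketch follows, where the mask radius is arbitrary.

```python
# Low pass / high pass FFT filtering with low frequencies centered,
# mirroring the "Low Frequencies Center" idea; the radius is arbitrary.
import numpy as np

def fft_filter(image: np.ndarray, radius: float, low_pass: bool) -> np.ndarray:
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # low frequencies to center
    h, w = image.shape
    y, x = np.ogrid[:h, :w]
    dist = np.hypot(y - h / 2, x - w / 2)           # distance from center
    mask = dist <= radius if low_pass else dist > radius
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

image = np.random.rand(256, 256)                    # 256x256, as in the demo
smooth = fft_filter(image, 20, low_pass=True)       # include center: low pass
detail = fft_filter(image, 20, low_pass=False)      # exclude center: high pass
```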

Keeping the resolution low (256×256 in this case) will provide a reasonable live experience, though still not quite real-time.