Previously posted on old Pusu blog:
26 Mar ’05 – 22:06 by Cahit Güngör
Pusu does its job in a J2ME environment by means of a motion detection algorithm. Briefly, the motion detection algorithm is based on frame difference calculation.
After the reference image has been shot, the algorithm runs until a motion is detected.
The algorithm compares two images to identify whether there is a change in the projected vision significant enough to be considered a motion.
The calculation is done on the RGB data of the images, held in byte arrays.
Initially, the two images are processed to see if there is an overall change in the projected vision. This overall change must not affect the image difference calculation, since it might be caused by an environmental effect such as a difference in illumination rather than by a motion. The overall change is stored to be used in the pixel difference calculation, which is a part of the motion detection algorithm. The overall difference will be called correction hereafter in this text.
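The correction step described above can be sketched as follows. This is an illustrative reconstruction, not the actual Pusu source: the method name `computeCorrection` and the idea of using the mean per-byte difference as the correction value are assumptions.

```java
// Hypothetical sketch: estimate the global change between two frames
// as the mean absolute per-byte difference. A uniform illumination
// shift raises every pixel by roughly the same amount, so the mean
// captures it. Names and the exact formula are illustrative.
public class CorrectionSketch {
    static int computeCorrection(byte[] refRgb, byte[] curRgb) {
        long total = 0;
        for (int i = 0; i < refRgb.length; i++) {
            // bytes are signed in Java; mask with 0xFF to get 0..255
            total += Math.abs((refRgb[i] & 0xFF) - (curRgb[i] & 0xFF));
        }
        return (int) (total / refRgb.length); // mean global difference
    }

    public static void main(String[] args) {
        byte[] ref = {10, 20, 30, 40};
        byte[] cur = {20, 30, 40, 50}; // uniformly brighter by 10
        System.out.println(computeCorrection(ref, cur)); // prints 10
    }
}
```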
The two byte arrays are compared to see if there is a considerable change in the pixels. A considerable change is determined by a threshold value called the pixel threshold. If the difference between two corresponding pixels is greater than the pixel threshold, it is then compared with the correction value. According to the result, the pixel is labeled black if it has changed and white otherwise. The changed pixels can then be seen in the real-time image.
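The per-pixel decision above might look like the following sketch. The exact way Pusu combines the pixel threshold and the correction is not spelled out in the post; here we assume a pixel counts as changed only if its difference exceeds both values.

```java
// Assumed per-pixel logic (not the actual Pusu source): a pixel is
// labeled black (changed) when its difference exceeds the pixel
// threshold AND the global correction value; otherwise white.
public class PixelDiffSketch {
    static boolean[] diffMask(byte[] refRgb, byte[] curRgb,
                              int pixelThreshold, int correction) {
        boolean[] black = new boolean[refRgb.length];
        for (int i = 0; i < refRgb.length; i++) {
            int d = Math.abs((refRgb[i] & 0xFF) - (curRgb[i] & 0xFF));
            // differences explained by the global change are ignored
            black[i] = d > pixelThreshold && d > correction;
        }
        return black;
    }

    public static void main(String[] args) {
        byte[] ref = {0, 0, 0};
        byte[] cur = {5, 100, 8};
        boolean[] mask = diffMask(ref, cur, 10, 6);
        for (boolean b : mask) System.out.print(b ? 'B' : 'W');
        System.out.println(); // prints WBW
    }
}
```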
The new image, which keeps a view of the differences, is our reference. This image is processed to catch the motion of an entire body rather than individual changed pixels. This step of the motion detection algorithm is called blob calculation.
Blob calculation measures the size of an entire black blob by finding its radius. This radius is compared with another threshold value called the blob threshold. The blob threshold is determined according to the sensitivity wanted from the motion detection system. In our system, this sensitivity is defined by the user of the mobile phone and passed as an argument to the algorithm. For example, if the user has a cat and does not want to be alarmed by its motions, the blob threshold should be set accordingly.
Whenever a blob with a radius greater than the blob threshold is found, the algorithm finishes and tells the upper layer that there is a motion in the projected area.
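The blob check can be sketched roughly as below. The post does not show how Pusu computes the blob radius, so this sketch makes assumptions: blobs are found with a flood fill over the black/white mask, and the radius is approximated as half the larger side of the blob's bounding box. `java.util.ArrayDeque` is a Java SE class used here for brevity; a J2ME build would need its own stack.

```java
// Illustrative blob check (assumed implementation, not Pusu's code):
// flood-fill each black blob in the difference mask, estimate its
// radius from the bounding box, and report motion when the radius
// exceeds the blob threshold.
import java.util.ArrayDeque;
import java.util.Deque;

public class BlobSketch {
    static boolean motionDetected(boolean[][] black, int blobThreshold) {
        int h = black.length, w = black[0].length;
        boolean[][] seen = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (black[y][x] && !seen[y][x]) {
                    // flood fill to find the extent of this blob
                    int minX = x, maxX = x, minY = y, maxY = y;
                    Deque<int[]> stack = new ArrayDeque<int[]>();
                    stack.push(new int[]{y, x});
                    seen[y][x] = true;
                    while (!stack.isEmpty()) {
                        int[] p = stack.pop();
                        minY = Math.min(minY, p[0]); maxY = Math.max(maxY, p[0]);
                        minX = Math.min(minX, p[1]); maxX = Math.max(maxX, p[1]);
                        int[][] nbrs = {{p[0]-1,p[1]},{p[0]+1,p[1]},
                                        {p[0],p[1]-1},{p[0],p[1]+1}};
                        for (int[] n : nbrs)
                            if (n[0] >= 0 && n[0] < h && n[1] >= 0 && n[1] < w
                                    && black[n[0]][n[1]] && !seen[n[0]][n[1]]) {
                                seen[n[0]][n[1]] = true;
                                stack.push(n);
                            }
                    }
                    // half of the larger bounding-box side as a rough radius
                    int radius = Math.max(maxX - minX, maxY - minY) / 2;
                    if (radius > blobThreshold) return true;
                }
        return false;
    }

    public static void main(String[] args) {
        boolean[][] mask = new boolean[10][10];
        for (int y = 2; y < 9; y++)
            for (int x = 2; x < 9; x++) mask[y][x] = true; // 7x7 blob, radius 3
        System.out.println(motionDetected(mask, 2)); // prints true
        System.out.println(motionDetected(mask, 5)); // prints false
    }
}
```

Raising the blob threshold makes the detector ignore small moving bodies, which matches the cat example above.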
Four comments, already:
Is there any existing prototype/proof of concept of this yet?
Vikram – 06 April ’05 – 14:02
If you are asking about the full application, the answer is yes. We are actually testing it, and it achieves satisfactory results. The “Now Application Detects Motion” log samples some tests. (They are not very scientific, sorry :). The images that reflect motions will be sent to a database and the results will be published via the web. We are planning to finish that reporting side in a week; it will be announced in this blog as well.
Thank you very much for your attention…
Cahit Güngör – 06 April ’05 – 23:22
I am unable to understand the correction value and the pixel threshold. Can you elaborate? Also, are you doing the calculation on the mobile itself or on some server?
17 June ’05 – 10:28
First of all, we aren’t doing any processing outside the mobile. Every calculation is done on the mobile phone.
The purpose of the correction value is to cancel out the general differences between the two images. For example, if the overall light has changed in the second image, you have to eliminate this difference. That is what the correction value is for.
The pixel threshold is a simple threshold value that indicates whether there is a significant change between two corresponding pixels.
Cahit Gungor – 02 August ’05 – 15:52