When Is the iPhone Coming to Turkey?

The countries shown in the image at the side are those where the new iPhone 3G is expected to launch by 2009. Turkey is also among these countries. It is said that Vodafone will bring it to Turkey.

Image credit: (EPA/JOHN G. MABANGLO)


What refinements brought the execution time of the motion detection algorithm on J2ME down to a reasonable level?

Previously posted on old Pusu blog:

28 Mar ’05 – 22:32 by Cahit Güngör

As mentioned before, the motion detection algorithm took over 18 seconds on the J2ME platform just to compare two images. That is not a reasonable time if the application is to run in real time. An urgent decision had to be made: abandon development or keep going. If this interval could not be brought down, the project would be pointless. So the motion detection algorithm was put under focus for refinement. At last we have done it. It now runs in under 100 ms, which, compared with 18 seconds, is a reasonable time in which to detect motion. This is obviously a great achievement. 🙂

What was done?
The Float class was discarded and the difference is calculated in a simpler way, without the square root.
An int array of RGB data is used instead of a byte array of RGB data, which saves a lot of conversion time.
The most considerable gain came from using only one color channel from the RGB data instead of all three (see the sketch after this list).
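A minimal sketch of what the trimmed per-pixel comparison might look like after these three changes; the method name and threshold handling are illustrative, and the choice of the green channel is an assumption, since the post only says a single color is used.

    // Sketch of the trimmed comparison. Each pixel is a packed 0xAARRGGBB int;
    // only the green channel is compared (assumption), and a cheap absolute
    // difference replaces the old square-root distance. No Float, no byte[].
    static int countChangedPixels(int[] ref, int[] cur, int pixelThreshold) {
        int changed = 0;
        for (int i = 0; i < ref.length; i++) {
            int d = ((ref[i] >> 8) & 0xFF) - ((cur[i] >> 8) & 0xFF); // green only
            if (d < 0) d = -d;              // integer abs(), no library call
            if (d > pixelThreshold) changed++;
        }
        return changed;
    }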

The next question is whether the motion detection algorithm still works in this trimmed form.

Motion Detection Algorithm On Mobile Phone (J2ME)

Previously posted on old Pusu blog:

27 Mar ’05 – 22:20 by Cahit Güngör

When the time comes to embed the code into a real (J2ME) mobile device, there are some obstacles that have to be overcome.

1) The shots taken with getSnapshot return a byte array filled with data in whatever format the getSnapshot operation was asked for. But the motion detection algorithm relies only on RGB-format data. J2ME has no Format class like the one in JMF, which gives the coder access to many image utilities that we lack in J2ME. The solution to this conversion problem comes with the Image class, which lets us obtain the RGB region of an image created from a byte array in a different format (a sketch of this conversion follows this list).

2) The motion detection algorithm processes a byte array of RGB image data, but here we have an int array of RGB data. This seemed an easy obstacle to overcome, but it gave birth to serious problems in the following steps, which we could not anticipate at the time.

3) When the difference between pixels is calculated, the motion detection algorithm performs a square-root operation, which J2ME does not supply. We got past this problem by using Nikolay Klimchuk‘s Float class for J2ME (a sketch of this distance also follows below).
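A minimal sketch of the conversion path from obstacle 1, using the standard MIDP 2.0 / MMAPI calls the post refers to (VideoControl.getSnapshot, Image.createImage, Image.getRGB); the helper name is illustrative and error handling is omitted.

    import javax.microedition.lcdui.Image;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.control.VideoControl;

    // Turn one camera snapshot into an int array of packed 0xAARRGGBB pixels.
    static int[] snapshotToRgb(VideoControl vc) throws MediaException {
        byte[] encoded = vc.getSnapshot(null);  // null = device default format
        Image img = Image.createImage(encoded, 0, encoded.length);
        int w = img.getWidth(), h = img.getHeight();
        int[] rgb = new int[w * h];
        // getRGB decodes the image into 0xAARRGGBB ints, one per pixel
        img.getRGB(rgb, 0, w, 0, 0, w, h);
        return rgb;
    }

And, for obstacle 3, a sketch of what the original per-pixel distance might look like. The post doesn't show Klimchuk's Float API, so a plain integer Newton-iteration isqrt stands in for it here; the distance itself is the usual Euclidean RGB difference.

    // Per-pixel Euclidean RGB distance that needs a square root. isqrt() is
    // an integer Newton iteration standing in for the third-party Float class.
    static int pixelDistance(int p1, int p2) {
        int dr = ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF);
        int dg = ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF);
        int db = (p1 & 0xFF) - (p2 & 0xFF);
        return isqrt(dr * dr + dg * dg + db * db);
    }

    static int isqrt(int n) {
        if (n <= 0) return 0;
        int x = n, y = (x + 1) / 2;
        while (y < x) {
            x = y;
            y = (x + n / x) / 2;
        }
        return x;
    }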

The question is what the performance will be once these changes are made to adapt the motion detection algorithm to J2ME and run it on a mobile device.

The test results on the Nokia 6630 are very disappointing, even though this phone has one of the fastest processors on the market. The algorithm runs for several minutes, long enough for the screen saver to kick in. We started to refine the other code segments and found the bottleneck of the algorithm in the difference calculation, which performs shift and mask (bitwise) operations, and in the Float section. However hard we tried to refine it, that bottleneck still took over 18 seconds.

How does the motion detection algorithm work?

Previously posted on old Pusu blog:

26 Mar ’05 – 22:06 by Cahit Güngör

Pusu does its job in the J2ME environment with a motion detection algorithm. Briefly, the motion detection algorithm is based on frame-difference calculation.

After the reference image has been shot, the algorithm runs until a motion is detected.

The algorithm has two images and must decide whether there is a significant change between them that can be considered motion in the projected view.

The calculation is done on the RGB-format data of the images, held in byte arrays.

Initially the two images are processed to see whether there is an overall change in the projected view. Such an overall change must not affect the image-difference calculation, since it might be caused by an environmental effect such as a change in illumination rather than by motion. The overall change is stored and used in the pixel-difference calculation that is part of the motion detection algorithm. This overall difference will be called the correction hereafter in this text.

The two byte arrays are compared to see whether any pixels have changed considerably. Considerable change is determined by a threshold value called the pixel threshold. If the difference between two corresponding pixels is greater than the pixel threshold, it is then compared with the correction value. Depending on the result, the pixel is labeled black if it has changed and white otherwise. The changed pixels can now be seen in the resulting image.
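The post doesn't spell out the exact formulas, so the sketch below makes two assumptions that are mine, not the author's: the correction is the mean per-pixel difference between the frames, and a pixel is labeled black only when its difference exceeds both the pixel threshold and the correction.

    // Sketch of the correction + pixel-threshold stage. Assumed formulas:
    // correction = mean per-pixel difference; a pixel is "black" (changed)
    // when its difference exceeds both the pixel threshold and the correction.
    // In the returned mask, 1 = black (changed), 0 = white (unchanged).
    static int[] diffMask(int[] ref, int[] cur, int pixelThreshold) {
        int n = ref.length;
        int[] diff = new int[n];
        long sum = 0;
        for (int i = 0; i < n; i++) {
            int d = ((ref[i] >> 8) & 0xFF) - ((cur[i] >> 8) & 0xFF);
            if (d < 0) d = -d;
            diff[i] = d;
            sum += d;
        }
        int correction = (int) (sum / n);  // global change, e.g. illumination
        int[] mask = new int[n];
        for (int i = 0; i < n; i++) {
            mask[i] = (diff[i] > pixelThreshold && diff[i] > correction) ? 1 : 0;
        }
        return mask;
    }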

The new image, which keeps a view of the differences, is our reference. This image is processed to catch the motion of an entire body rather than individual minor pixels. This stage of the motion detection algorithm is called blob calculation.

The blob calculation measures the size of an entire black blob. It does so by finding the radius of the blob, which is then compared with another threshold value called the blob threshold. This value is set according to the sensitivity wanted from the motion detection system. In our system, this sensitivity is defined by the user of the mobile phone and passed as an argument to the algorithm. For example, if the user has a cat and doesn’t want to be alarmed by its motions, the blob threshold should be assigned accordingly.

Whenever a blob with a radius greater than the blob threshold is found, the algorithm finishes, telling the upper layer that there is motion in the projected area.
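The post doesn't describe how the radius is found, so the following sketch is one plausible reading, not the Pusu implementation: blobs are collected with a simple flood fill over the black/white mask, and a blob's "radius" is taken as half the larger side of its bounding box.

    // Sketch of the blob stage under stated assumptions: flood-fill each
    // black region of the mask (1 = black) and approximate its radius as
    // half the larger side of the region's bounding box.
    static boolean hasMotion(int[] mask, int w, int h, int blobThreshold) {
        boolean[] seen = new boolean[mask.length];
        int[] stack = new int[mask.length];
        for (int start = 0; start < mask.length; start++) {
            if (mask[start] == 0 || seen[start]) continue;
            int top = 0;
            stack[top++] = start;
            seen[start] = true;
            int minX = w, maxX = 0, minY = h, maxY = 0;
            while (top > 0) {                 // flood-fill one blob
                int p = stack[--top];
                int x = p % w, y = p / w;
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
                // visit 4-connected black neighbours not yet seen
                if (x > 0 && mask[p - 1] == 1 && !seen[p - 1]) { seen[p - 1] = true; stack[top++] = p - 1; }
                if (x < w - 1 && mask[p + 1] == 1 && !seen[p + 1]) { seen[p + 1] = true; stack[top++] = p + 1; }
                if (y > 0 && mask[p - w] == 1 && !seen[p - w]) { seen[p - w] = true; stack[top++] = p - w; }
                if (y < h - 1 && mask[p + w] == 1 && !seen[p + w]) { seen[p + w] = true; stack[top++] = p + w; }
            }
            int radius = Math.max(maxX - minX, maxY - minY) / 2;
            if (radius > blobThreshold) return true;  // report motion upward
        }
        return false;
    }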

four comments, already:

Does any existing prototype/proof of concept of this exist yet?

Vikram – 06 April ’05 – 14:02

If you are asking about the full application, the answer is yes. We are actually testing it, and it achieves satisfactory results. The “Now Application Detects Motion” post shows some sample tests. (They are not very scientific, sorry :). The images that reflect motion will be sent to a database and the results will be published via the web. We are planning to finish that reporting side in a week; it will be announced on this blog as well.
Thank you very much for your attention…
Cahit Güngör

Cahit Güngör – 06 April ’05 – 23:22

I’m unable to understand the correction value and pixel threshold. Can you elaborate? Also, are you doing the calculation on the mobile itself or on some server?

abhinav – 17 June ’05 – 10:28

First of all, we aren’t doing any processing outside the mobile. Every calculation is done on the mobile phone itself.

The purpose of the correction value is to cancel out the general differences between the two images. For example, if the overall light has changed in the second image, you have to eliminate this difference. That is what the correction value is for.

The pixel threshold is a simple threshold value that indicates whether there is a significant change between two corresponding pixels.

Cahit Gungor – 02 August ’05 – 15:52