
Template 2: Hand detection

In this template you have access to the library ml5.js. It is built on top of the library tensorflow.js and lets you run machine learning models directly in the browser. In this case, we are going to use the handpose model to detect hands in the live video. You can read more about it on the ml5 website.
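As a point of reference, here is a minimal sketch of how the model can be loaded in the browser with ml5.js and p5.js. It assumes the current ml5.js handPose API (ml5.handPose, detectStart, keypoints); the template's own code may be wired differently:

    // Minimal sketch, assuming ml5.js handPose + p5.js (the template may differ).
    let handPose;
    let video;

    function preload() {
      handPose = ml5.handPose(); // load the hand detection model
    }

    function setup() {
      createCanvas(640, 480);
      video = createCapture(VIDEO);
      video.size(640, 480);
      video.hide();
      handPose.detectStart(video, gotHands); // run detection on every video frame
    }

    function gotHands(results) {
      for (let hand of results) {
        // Keypoint 8 is INDEX_FINGER_TIP; dividing by the video size gives
        // the normalized 0-1 values described later in this page.
        let tip = hand.keypoints[8];
        console.log(hand.handedness, tip.x / video.width, tip.y / video.height);
      }
    }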

The result will be something like this:


Download Patch example

What should your patch include?

Your patch must have some receivers using the L- and R- prefixes (for the left and right hand) followed by the number of the keypoint you want, following the image below:


For example, to receive the position of the INDEX_FINGER_TIP of the LEFT hand, you must use the object [r L-8] in your patch.
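If you don't have the image at hand, the numbering follows the standard 21-point hand landmark set used by the handpose model. Listed here as a JavaScript array for reference:

    // Hand keypoint numbering used by the handpose model (MediaPipe hand
    // landmarks); index 8 is INDEX_FINGER_TIP, as in the [r L-8] example.
    const HAND_KEYPOINTS = [
      "WRIST",                                                                           // 0
      "THUMB_CMC", "THUMB_MCP", "THUMB_IP", "THUMB_TIP",                                 // 1-4
      "INDEX_FINGER_MCP", "INDEX_FINGER_PIP", "INDEX_FINGER_DIP", "INDEX_FINGER_TIP",    // 5-8
      "MIDDLE_FINGER_MCP", "MIDDLE_FINGER_PIP", "MIDDLE_FINGER_DIP", "MIDDLE_FINGER_TIP",// 9-12
      "RING_FINGER_MCP", "RING_FINGER_PIP", "RING_FINGER_DIP", "RING_FINGER_TIP",        // 13-16
      "PINKY_MCP", "PINKY_PIP", "PINKY_DIP", "PINKY_TIP",                                // 17-20
    ];

    console.log(HAND_KEYPOINTS[8]); // "INDEX_FINGER_TIP"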


From these receivers, you will get two numbers, x and y, inside a list. These numbers represent the position of that point of your hand relative to your real-time video, normalized between 0 and 1:

  • X Position: 0 is the left edge and 1 is the right edge.
  • Y Position: 0 is the top edge and 1 is the bottom edge.

In summary, [x, y] defines the hand's position within a coordinate system where (0,0) is the top-left corner and (1,1) is the bottom-right corner of the image.
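For instance, if you want to draw something at the detected position, you can scale the normalized pair back to pixels. The 640x480 size below is only an assumption; use your own canvas dimensions:

    // Scale a normalized [x, y] pair to pixel coordinates.
    // 640x480 is an assumed video size; replace it with your own.
    function normToPixels(x, y, width = 640, height = 480) {
      return { px: x * width, py: y * height };
    }

    console.log(normToPixels(0, 0));     // { px: 0, py: 0 }     top-left corner
    console.log(normToPixels(1, 1));     // { px: 640, py: 480 } bottom-right corner
    console.log(normToPixels(0.5, 0.5)); // { px: 320, py: 240 } center of the image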


If you download the example patch and use the information above, you'll see that we can use the following object to detect if your finger is in the top-left corner of the image:

[expr if($f1 < 0.2 && $f2 < 0.2, 1, 0)]

In this expression:

  • $f1 represents the x coordinate (horizontal position).
  • $f2 represents the y coordinate (vertical position).

This expression checks if:

  • $f1 is less than 0.2 (meaning your hand is near the left side of the image).
  • $f2 is less than 0.2 (meaning your hand is near the top of the image).

If both conditions are true, the output will be 1 (indicating your finger is in the top-left corner). Otherwise, the output will be 0.
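For comparison, the same check written in JavaScript (a hypothetical helper, not part of the template):

    // Equivalent of [expr if($f1 < 0.2 && $f2 < 0.2, 1, 0)]:
    // outputs 1 while the point sits inside the top-left 20% of the image.
    function inTopLeftCorner(x, y) {
      return (x < 0.2 && y < 0.2) ? 1 : 0;
    }

    console.log(inTopLeftCorner(0.1, 0.15)); // 1 -> finger in the corner
    console.log(inTopLeftCorner(0.5, 0.1));  // 0 -> too far to the right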