The Python source code below in Figures 1.5(a)-(d) shows our initial object detection algorithm for detecting a green ball. Each of the three video sources will use its own instance of this object detection algorithm.

Since the cameras will be placed in a triangle, each camera will use its own unique parameters to detect the green ball; this is explained further in the next section, Object Localization Triangulation Algorithm. In the final implementation of the algorithm, we plan to detect a wider array of objects, but for this initial implementation we designed the algorithm to detect only a green ball. The detection algorithm supports movement of the ball along the X, Y, and Z axes. Figure 1.4(a) shows how we defined the parameters for one camera.

Figure 1.4(b), Figure 1.4(c), and Figure 1.4(d) show how we used the defined parameters to detect a green ball. Multiple parameters need to be defined for each camera; Figure 1.5(a) shows these parameters. The parameter, KNOWN_DISTANCE, is used to define the distance from the camera, in inches, at which the object will be detected.

The parameter, KNOWN_WIDTH, is used to define the approximate width of the object, in inches. The parameter, marker, is used to define the detected object’s region, which will be bounded by a box. The value, focalLength, is then calculated from these parameters to determine the depth at which the algorithm will detect the object. The parameters, greenLower and greenUpper, define the range of green colors on the HSV spectrum to detect. The variable, counter, keeps track of how many frames the algorithm has processed. The variables dX, dY, and dZ store the differences between the X-, Y-, and Z-coordinates of the object in the current frame and those in a previously computed frame. The variable, direction, stores the current direction in which the object is moving. In the next few lines of code, we define the video source for the algorithm.
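As an illustration, the sketch below shows how these definitions might look in Python. The specific numeric values, the shape of marker, and the exact focal-length calculation are assumptions made for illustration, not the actual contents of Figure 1.5(a).

```python
# Illustrative sketch of the Figure 1.5(a) parameter definitions.
# All concrete values here are assumed, not taken from the paper.

# Calibration constants: the ball was observed at a known distance
# and has a known physical width, both in inches.
KNOWN_DISTANCE = 24.0   # assumed calibration distance, in inches
KNOWN_WIDTH = 2.5       # assumed approximate ball width, in inches

# marker: the region of the detected object that will be bounded by a
# box; an (x, y, w, h) tuple is assumed here for simplicity.
marker = (0, 0, 100, 100)

# The perceived pixel width at the known distance yields the focal length:
# focalLength = (pixel_width * KNOWN_DISTANCE) / KNOWN_WIDTH
focalLength = (marker[2] * KNOWN_DISTANCE) / KNOWN_WIDTH

# HSV bounds for "green"; typical values for green-ball tracking.
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)

# Frame counter, per-axis deltas between the current frame and a
# previously computed frame, and the current movement direction.
counter = 0
(dX, dY, dZ) = (0, 0, 0)
direction = ""
```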

This video source will be supplied by the code previously discussed in Video Source Data Collection. After defining the initial parameters and the video source, we supply these parameters to OpenCV algorithms. Figure 1.5(b) below shows how we defined more parameters using OpenCV functions.
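A minimal sketch of how the video source might be supplied is shown below. The command-line flag, the fallback to a live webcam, and the variable names are assumptions; the paper's own wiring comes from the Video Source Data Collection code.

```python
import argparse
import cv2

# Accept a path to a recorded video file; the flag name is assumed.
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the recorded video file")
args = vars(ap.parse_args())

# Use the supplied recording if present, otherwise fall back to a
# live webcam (device 0).
if args.get("video"):
    vs = cv2.VideoCapture(args["video"])
else:
    vs = cv2.VideoCapture(0)
```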

The first few lines of code make sure a video was supplied to the algorithm before continuing. We then use OpenCV functions to apply a Gaussian blur to the frame, smoothing the image and reducing noise, and to convert it to the HSV color space. Next, we use OpenCV functions to construct a “mask” for the color green and perform a series of erosions and dilations to remove any small discrepancies in the mask. Finally, we find the contours of the mask’s outline.
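The sketch below illustrates this per-frame pipeline, assuming vs, greenLower, and greenUpper are defined as in the earlier sketches. The blur kernel size and iteration counts are typical choices, not necessarily the paper's exact values.

```python
import cv2

# Grab the next frame and stop if the video has ended or none was supplied.
grabbed, frame = vs.read()
if not grabbed:
    raise SystemExit("no frame supplied; end of video")

# Smooth the image to reduce high-frequency noise, then convert to HSV.
blurred = cv2.GaussianBlur(frame, (11, 11), 0)
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

# Build a binary mask for the green range, then erode and dilate to
# remove small discrepancies left in the mask.
mask = cv2.inRange(hsv, greenLower, greenUpper)
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)

# Find the contours of the mask's outline (OpenCV 4.x return signature;
# OpenCV 3.x returns an extra leading value).
contours, _ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
```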