Omni-Directional Mobile Robot Control using Raspberry Pi and Jetson Nano

This paper describes the design of a low-cost omni-directional mobile robot capable of detecting simple objects around it while moving. The robot uses three DC motors with encoders as feedback. Overall movement control is handled by a Raspberry Pi embedded in the robot, with ROS (Robot Operating System) as the framework. To detect objects, the robot is equipped with an infrared sensor for measuring object distance and a camera whose images are processed by an Nvidia Jetson Nano. Using inverse kinematics and odometry calculations, the robot moves smoothly with an error of about 9.5% on the x-axis and 8.1% on the y-axis, measured at the robot's final position. The robot detects object distance reliably with the infrared sensor, with an error of about 0.87%; however, it cannot measure object size precisely due to various factors.


Introduction
Robot design and programming is a core subject in engineering with rich aspects to explore, especially for educational purposes. By developing robust but low-cost robots themselves, students and researchers can sharpen their knowledge while exploring novel aspects of robotics. This is in line with an observation by the ABI Research team: "Flexibility and efficiency have become the main differentiators in a system, in order to cope with volatile product demand, seasonal peaks, and rising consumer shipping expectations" [1].
Since flexibility and efficiency are important in mobile robotics, a robot should be designed to move easily to specific locations over a wide workspace. For this, the robot needs good mobility and maneuverability, which depend on its wheel configuration.
One disadvantage of conventional wheels is their limited motion: the robot cannot move sideways without preliminary maneuvers. This problem can be overcome with special omni-directional wheels that allow the robot to be driven in any direction.
In this paper, we describe the process of building such a mobile robot. In the future, the robot can also be equipped with a robotic arm to create a mobile manipulator. For this purpose, the robot needs to know the distance to the object, along with its height and width, so that the manipulator arm can grip objects correctly. This paper is organized as follows: Section 2 describes the design of the omni-directional mobile robot, Section 3 evaluates the robot's performance, and Section 4 concludes our work.

Research Methods
The following figure shows the logical connection between core components of our mobile robot.
As shown in Figure 1, the Raspberry Pi acts as the controller: it receives the destination position and direction of motion from the user and drives the DC motors. Each encoder reads its wheel's speed and feeds it back, so the Raspberry Pi can estimate the distance the robot has traveled.
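The encoder-based position estimate amounts to inverting the wheel kinematics described later in this section. A minimal sketch, assuming the wheel geometry given below; the function name and the solve-based formulation are our own illustration, not the paper's code:

```python
import numpy as np

# Wheel geometry from this paper: wheel angles from the x-axis (degrees),
# wheel-to-center distance R and wheel radius (meters).
ALPHAS = np.radians([270.0, 30.0, 150.0])
R, WHEEL_R = 0.12, 0.0246

def odometry_step(wheel_rates, theta, dt):
    """Integrate measured wheel rates (rad/s) over dt into a pose change
    (dx, dy, dtheta) by inverting the 3-wheel omni kinematics."""
    # Rows of M map body velocity (vx, vy, omega) to each wheel's speed.
    M = np.column_stack([-np.sin(theta + ALPHAS),
                         np.cos(theta + ALPHAS),
                         np.full(3, R)])
    body_vel = np.linalg.solve(M, WHEEL_R * np.asarray(wheel_rates, dtype=float))
    return body_vel * dt
```

Accumulating these per-step pose changes between encoder samples gives the distance-traveled estimate used to decide when the target is reached.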
Infrared sensor input is used to detect the distance between the robot and destination objects. The camera output is processed by the Jetson Nano, which reads the height and width of the detected object and sends the result to the Raspberry Pi. Both boards (Raspberry Pi and Jetson Nano) are connected to the same network and use ROS for communication between controllers.
Figure 2 shows the basic inverse kinematics calculation of an omni-directional 3-wheel robot system [2]. The final equation can be written as follows:
φ̇_i = (1/r) [ -sin(θ + α_i) ẋ + cos(θ + α_i) ẏ + R θ̇ ],  i = 1, 2, 3   (1)
with the variables used in this study:
• α1 is the angle between the x-axis and wheel 1, 270°
• α2 is the angle between the x-axis and wheel 2, 30°
• α3 is the angle between the x-axis and wheel 3, 150°
• R is the distance from each wheel to the center point, 0.12 m
• r is the radius of the wheel, 0.0246 m
• θ is the initial angle of the robot, 0
This equation is implemented in the program to determine the wheel rotations needed to reach an intended x, y, θ position [3]. The units of x and y are meters, while θ, φ1, φ2, and φ3 are in radians.
To regulate the movement of the robot, the encoder acts as feedback to estimate the robot's motion, so once the robot reaches the target position, the motors stop immediately.
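Equation (1) maps a desired body velocity to the three wheel speeds. A minimal sketch of how it can be implemented, using the angles and dimensions listed above; the function name is illustrative and the sign convention is the common one for this wheel layout:

```python
import math

# Geometry from this paper: wheel angles from the x-axis, wheel-to-center
# distance R, and wheel radius.
ALPHAS = [math.radians(a) for a in (270.0, 30.0, 150.0)]
R = 0.12          # m, distance from robot center to each wheel
WHEEL_R = 0.0246  # m, wheel radius

def inverse_kinematics(vx, vy, omega, theta=0.0):
    """Map a body velocity (vx, vy in m/s, omega in rad/s) to the three
    wheel angular velocities (rad/s) via equation (1)."""
    rates = []
    for alpha in ALPHAS:
        # Speed required at this wheel's contact point.
        v_wheel = (-math.sin(theta + alpha) * vx
                   + math.cos(theta + alpha) * vy
                   + R * omega)
        rates.append(v_wheel / WHEEL_R)
    return rates

# Pure rotation at 1 rad/s: every wheel must spin at the same rate R/r.
w1, w2, w3 = inverse_kinematics(0.0, 0.0, 1.0)
```

Dividing the target x, y, θ displacement by the allotted travel time gives the body velocity to feed into this function.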
To detect the distance between the robot and the object, an infrared sensor is used. The SHARP GP2Y0A21 sensor used can read a range of 10–80 cm. Calibration was performed to determine the conversion equation for the sensor, using a black rectangular target measuring 12 cm x 12 cm. The resulting regression equation maps 'x', the raw reading from the sensor, to 'y', the distance from the sensor to the object in centimeters.
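The calibration step can be sketched as follows. The readings below are placeholders, not the paper's calibration data, and the inverse-linear model is a common choice for this sensor family rather than the paper's exact regression:

```python
import numpy as np

# Placeholder calibration pairs, NOT the paper's data: raw readings that
# would be captured at known distances with the 12 cm x 12 cm target.
raw = np.array([620.0, 413.0, 310.0, 248.0, 207.0, 177.0, 155.0, 138.0])
dist_cm = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0])

# SHARP GP2Y0A21 output falls roughly inversely with distance, so a
# linear fit in 1/x is a convenient model: y = a*(1/x) + b.
a, b = np.polyfit(1.0 / raw, dist_cm, deg=1)

def to_distance(x):
    """Convert a raw sensor reading x to distance y in centimeters."""
    return a / x + b

print(round(float(to_distance(310.0)), 1))  # ≈ 20 cm for this placeholder data
```

The fitted function then replaces the lookup of raw sensor values at runtime.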
To measure the size of an object in front of the camera, an object-detection program is required. The camera used is the Raspberry Pi Camera Module v2, and its images are processed using OpenCV. To calculate the size of a detected object, the ratio between the pixels read in the image and the distance from the camera to the object must be determined [4]. The system is first calibrated with a reference object whose actual size is known. The calibration yields a relation in which 'x' is the measured distance and 'y' is a dividing constant used to calculate object size; the program then computes: actual size = pixel distance / constant.

Motor's characteristic test
The graph shown in Figure 7 shows the difference in motor speed between no-load and loaded conditions. After obtaining the characteristic data for each motor, a regression equation can be made to convert the desired speed to the PWM value needed for each motor. From the table data, the regression equations were obtained using Microsoft Excel:

PWM_a = -4×10^-10 y^6 + 2×10^-7 y^5 - 4×10^-5 y^4 + 0.0043 y^3 - 0.2179 y^2 + 6.0869 y - 42.92
y = 7×10^-6 v^4 - 0.0016 v^3 + 0.1341 v^2 - 4.0937 v + 52.613

Equations (5), (6), and (7) are regression equations in which v is the velocity required for each motor and PWM_a, PWM_b, and PWM_c are the PWM values given to the motors.
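The rpm-to-PWM conversion for motor A can be evaluated directly from the printed coefficients (truncated to the precision shown in the text, so outputs are approximate); clamping to a valid 8-bit duty range is our own safeguard, not part of the paper's equations:

```python
import numpy as np

# Degree-6 regression coefficients for motor A exactly as printed above
# (highest degree first, as np.polyval expects).
PWM_A_COEFFS = [-4e-10, 2e-7, -4e-5, 0.0043, -0.2179, 6.0869, -42.92]

def rpm_to_pwm(rpm, coeffs=PWM_A_COEFFS):
    """Evaluate the regression polynomial for a desired speed and clamp
    the result to a valid PWM duty in [0, 255]."""
    pwm = np.polyval(coeffs, rpm)
    return float(min(max(pwm, 0.0), 255.0))
```

Each motor gets its own coefficient set (equations (5)-(7)); only motor A's coefficients are reproduced in the text above.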
To reach each motor's target speed more accurately, a proportional controller is used, bringing the motor's rotational speed closer to the target. The proportional gain of 0.7 was obtained by trial and error. For diagonal robot movements, there is an average error of 9.51% on the x-axis and 8.12% on the y-axis. All runs are set to reach the target position within 4 seconds, except for the targets (x = 120, y = 120) and (x = -120, y = -120), where the movement time is set to 5 seconds because motor 3's required speed would exceed its maximum in 4 seconds. Looking at the data in Table 2, a small final position (in this test x = ±40, y = ±40) tends to cause a larger error, because the required motor speed is below 20 rpm, which the proportional control has difficulty reaching. From the responses of the three motors, several points can be observed:
1. All three motors have difficulty reaching low speeds, especially below 25 rpm, indicated by the high error of the proportional control response.
2. The current gain setting of 0.7 is more suitable and more stable at high speeds, especially above 90 rpm, indicated by the smaller error of the proportional control response.
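The proportional speed loop can be sketched as below, with the gain of 0.7 tuned in this work; the incremental update form and the toy motor model are our own illustrative assumptions, not the robot's actual firmware:

```python
# Proportional speed controller with the trial-and-error gain Kp = 0.7.
KP = 0.7

def p_speed_step(target_rpm, measured_rpm, current_cmd):
    """One control step: nudge the speed command in proportion to the
    encoder error (target minus measured speed)."""
    error = target_rpm - measured_rpm
    return current_cmd + KP * error

# Simulate a simple first-order motor to show the loop converging on a
# 90 rpm setpoint, where the controller is reported to behave best.
cmd, rpm = 0.0, 0.0
for _ in range(50):
    cmd = p_speed_step(90.0, rpm, cmd)
    rpm += 0.5 * (cmd - rpm)  # toy motor dynamics, not the real plant
```

At setpoints below about 25 rpm the error term is small relative to friction and dead-band effects, which is consistent with the poor low-speed tracking observed above.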

Distance measurement with Infrared sensor
This test was carried out to evaluate the infrared sensor used for reading the distance between an object and the robot. The object used for testing is a rectangular radio with a height of 8.6 cm and a width of 13.4 cm. In the previous experiments, objects with a flat surface were used. An experiment was then carried out using a tubular object with a curved surface, and with the object's position changed; the test method remained the same as before. This determines the effect of the object's surface and position on distance reading with an infrared sensor. As a result, a curved surface or a tilted object does affect the infrared reading and tends to produce a larger error.

Object size measurement using pi camera and Jetson Nano
Following the distance measurement test, object dimension calculation was also performed, with the Nvidia Jetson Nano used for the image processing. Due to physical constraints, the system cannot detect more than one object, and the measured object must be taller than 7.5 cm (the height of the sensor above the ground surface).
For testing, we used various everyday objects: a radio, a food box, a flashlight, a DVD box, a bottle, and a spool of sewing thread (see Figure 11). The measurement results are as follows. The food box measurements have an average error of 12.89% because, at a distance of 20 cm, the object fills the camera's entire frame, so the size reading is invalid; the average error for its height is 3.77%. For the round flashlight, the average errors are 15.11% and 28.9%, because at distances of 20 cm and 30 cm the object is too large for the camera to read. Measurement of the DVD box is only effective at distances greater than 50 cm, because at closer distances the camera frame cannot capture the entire object. The balsam bottle could not be measured at all: its height is only 3.6 cm while the sensor is mounted 7 cm above the ground surface, so the infrared sensor cannot detect it, no distance reading is produced, and the camera measurement cannot be processed correctly. From these experiments, we observe the following:
• The system cannot detect objects whose color matches the background.
• The shapes that can be measured are squares, rectangles, circles, and tubes; beyond these, the reading becomes less accurate.
• To be measured, the object must fit entirely within the camera frame; if it is too close to the camera, the reading may be invalid.
Figure 12 shows the sketch and the physical appearance of our robot.
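The pixel-to-size conversion described in the methods section (actual size = pixel distance / constant) can be sketched as follows; the linear model and its coefficients are placeholders, not the paper's fitted regression:

```python
# The dividing constant (pixels per cm) depends on the camera-to-object
# distance reported by the infrared sensor. The coefficients a and b
# below are hypothetical, standing in for the calibrated regression.
def size_constant(distance_cm, a=-0.05, b=12.0):
    """Hypothetical linear model: pixels-per-cm shrinks as the object recedes."""
    return a * distance_cm + b

def actual_size_cm(pixel_distance, distance_cm):
    """Convert a measured pixel span to centimeters at a known distance."""
    return pixel_distance / size_constant(distance_cm)

# A 100 px span at 40 cm maps to 100 / (12 - 0.05*40) = 10 cm here.
```

This also makes the failure modes above concrete: without a valid infrared distance (as with the balsam bottle), there is no constant to divide by, and an object that overflows the frame yields an invalid pixel span.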

Conclusion
This paper described the design of a 3-wheel omni-directional mobile robot whose motion is controlled by a Raspberry Pi and which can detect simple objects in front of it. Inverse kinematics was used to determine the direction and speed of each motor, and odometry was used to estimate the robot's position. With these two calculations, the robot can move and stop at a user-specified position with an error of 9.51% on the x-axis and 8.12% on the y-axis at the final position. This is influenced by the robot's speed in reaching that point, the proportional gain used, and the distance between the final and initial positions. Reading object distance with the infrared sensor gave an average error of 0.87%; the accuracy is influenced by the angle of the object relative to the sensor and the surface of the object being measured. For object detection, the OpenCV library was used, which relies on the color contrast between the object and the background. Testing with six objects of different sizes showed that this method can read object sizes in front of the robot with an average error of 30.02% for length and 41.8% for height, influenced by object size, object distance from the camera, and the detection method used. For future work, we plan to combine this omni-directional robot platform with a manipulator to create an adaptive mobile manipulator.