STEM Capstone Project


Abstract

A device was created to improve the safety of blind individuals on the streets. Both a time-of-flight sensor and an ultrasonic sensor were tested for accuracy and reliability. The ultrasonic sensors proved more accurate and were chosen for the final product. Three of these sensors drive an MP3 player and three vibration motors, all attached to a belt that the user wears around the chest. After final modifications were made to the device’s algorithm, surveys and product testing were conducted with blind individuals. In testing, the device detected objects in the blind person’s blind spot with a minimum error of 7.45% for the ultrasonic sensor, showing that the device improved safety and increased reaction time toward different types of obstacles.

Introduction

The safety and wellbeing of citizens remains a major priority for society. This applies to people who are vision impaired, as they often have difficulty navigating their communities. The prevalence of vision impairment has been rising, and many of these conditions are severe and require attention: worldwide, the number of people with vision impairment increased from 159.9 million in 1990 to 216.6 million in 2015. The Centers for Disease Control and Prevention estimates that the number of persons affected by vision impairment in the United States will continue to increase from 2015 to 2050 (Figure 1). Two of the most common eye problems that lead to blindness if untreated are cataracts and presbyopia. Presbyopia alone affects over one billion people worldwide, more than 11 million of them in the United States.


Figure 1: Increasing Prevalence of Vision Impairment in the United States (Varma, 2016)

The purpose of this project is to apply modern technology to improve safety for the blind and vision impaired. Rather than relying on people to physically guide the vision impaired at all times, which can be inefficient and burdensome for the helper, autonomous devices can accomplish this. Autonomous robots already exist to help the disabled, but they are not yet used effectively to assist the vision impaired. Nevertheless, assistive devices built with modern technology are feasible and should be used to improve the safety and wellbeing of the vision impaired.

When vision impairment is not treated properly, the individual can suffer from physical or mental health issues. Vision impairment reduces daily activity, including social interaction and movement (Crews, 2004). People who are vision impaired often have difficulty connecting with their community because their disability makes working and conversing frustrating to maintain on a daily basis. Important connections, including new friendships, become difficult to form, resulting in isolation that can lead to mental and physical problems. When nobody is willing to assist the vision impaired, they are left to navigate dangerous streets on their own with no guarantee of safety. Isolation and vision impairment are directly “associated with increases in hip fractures, falls, depression, physician visits and hospitalizations, mortality, and family stress” (Crews, 2004). The environment and community are not structured in a way that makes navigation easy for the disabled. In this era of industrial innovation, the streets are packed with machines, including small and large vehicles, construction zones, and hazardous debris. These raise the likelihood of safety risks and injuries, especially for individuals who have difficulty noticing hazards on their own. Vision problems affect one’s ability to learn, to succeed economically, and to be satisfied with life, and they cause health issues (Stevens, 2013). Isolation makes a person feel neglected by society; he or she may come to believe the community will not bother forming relationships with them, deepening feelings of failure and reducing quality of life. This can ultimately lead to severe depression, stress, and higher rates of mortality.

Modern technology is capable of creating sophisticated autonomous systems to assist the vision impaired. “The importation of elements from logic and formal deductive reasoning has provided a powerful basis for modelling and analysing argumentation in computational settings of AI” (Bench-Capon, 2007). Scientists are familiar with the mathematical logic and algorithms, along with the knowledge of physics and motion, needed to develop an intelligent robot. Artificially intelligent systems are being developed, but most are used for scientific studies rather than for assisting individuals who struggle to live happily in their communities. Scientists “build machines and programs as a way of discovering new phenomena and analyzing phenomena [the world] already know about” (Newell, 2007). Using this concept, it is possible to analyze the known “phenomena” of vision impairment and engineer an autonomous device to learn about, address, and combat vision impairment to assist these people. Scientists have been developing robotic devices to assist individuals with limited body movement, as well as for scientific studies, and these can be repurposed to improve the safety of the vision impaired. Those who have difficulty seeing are challenged in navigating their communities. Robotic movement that mimics human movement can be used to help the vision impaired navigate obstacles and their surroundings more efficiently. Artificially intelligent robots are developed to “operate not where humans cannot go, but rather share space with humans in human environments. These robots are compelling not for reasons of mobility but because of their autonomy, and so their ability to maintain a sense of position and to navigate without human intervention is paramount” (Siegwart, 2011). Artificial intelligence can navigate around obstacles much as humans do.
The electronics, mechanics, and algorithms used to accomplish this can be replicated in a device that assists the vision impaired with navigation. Such a guidance device would apply the navigation algorithms and programming of artificially intelligent robots, much as robotic arms are created to help disabled individuals move their limbs. With improved navigation and assisted movement, the visually impaired will feel safer in their communities and be able to live with greater independence.

Research Design

The goal of the project is to utilize laser and ultrasonic technology to detect obstacles for the wearer, providing directional feedback through vibration and audio warning systems. These components are attached to a belt worn above the waist. The product is designed to maximize accuracy and reliability by addressing any errors found during the accuracy data collection.

Materials

Equipment and Tools

  1. Soldering Iron and Solder
  2. Computer for Arduino IDE and 3D Modeling Software
  3. USB-A to USB-B cable
  4. Phillips Head Screws and Nuts (¼”, 4-40)
  5. Phillips Head Screwdriver
  6. Electrical Tape
  7. Measuring Tape
  8. 3D Printer

Electronics

  1. Three Vibration Motors
  2. Open-source Microcontroller
  3. VL53L1X Time-of-Flight Sensor
  4. Three HC-SR04 Ultrasonic Sensors
  5. Qwiic Shield
  6. Qwiic Cable - 500mm
  7. Stackable Header Kit
  8. MP3 Player Shield
  9. 6 Inch Jumper Wires (M-F, M-M)
  10. Barrel Jack Power Switch - M-F (3”)
  11. 9V Battery and Battery Holder
  12. microSD card (2GB)
  13. 3.5mm Headphone Jack Extender (M-F)

Other Project Parts

  1. Velcro
  2. Belt
  3. Cable Ties
  4. 3D Printer ABS Filament

Methods

To begin the research, a survey was developed to understand project participants’ needs and wants. Three blind individuals were contacted to discuss project goals and needs. After these interviews, the time-of-flight sensor was chosen as the distance sensor because of its small size, fast reading frequency, and maximum distance reading of four meters. A basic circuit was built using a VL53L1X time-of-flight sensor and an MP3 player shield. Audio warning files were loaded onto a microSD card inserted into the MP3 player shield. To gauge the accuracy of the sensor, four tests were conducted in controlled indoor environments (no sunlight, no shade, no wind, no ambient noise, constant room temperature): accuracy in the light at 90 degree and 45 degree angles, and accuracy in the dark at 90 degree and 45 degree angles. The sensor was tested on five types of objects: a person, clear plastic, a white object, a black object, and a metallic (reflective) object. Six distances were used to gather the data: 10, 20, 50, 70, 90, and 110 inches from the sensor. Fifty distance readings were taken at each of the six distances, and their average was recorded as the final measurement. Based on the data and the survey of blind participants, the distance at which a warning signal begins was set at 2 meters. All algorithms were then programmed to finalize the functionality and maximize the effectiveness of the product. To secure the electronics and wiring onto the belt, 3D prototypes were created for the casings of the sensor and microcontroller (Figures 2 to 8). Once all parts were in place, the product was tested personally and was then ready to be tested with blind participants to gather final data on its effectiveness.
Additional improvements were made to the product by testing the accuracy of HC-SR04 ultrasonic sensors, using the same data collection methods as the VL53L1X tests. After concluding that the HC-SR04 was more accurate, new 3D prototypes were created for these sensors and a more efficient mounting system was designed for the belt.
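The averaging and percent-error procedure described above can be sketched in C++. This is a minimal illustration of the data-reduction step; the function names are illustrative and not taken from the project code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Average a set of repeated sensor readings (the project used 50 per distance).
double averageReading(const std::vector<double>& readings) {
    double sum = 0.0;
    for (double r : readings) sum += r;
    return sum / readings.size();
}

// Percent error of the averaged reading against the known true distance,
// as reported in Table 9.
double percentError(double measured, double actual) {
    return std::fabs(measured - actual) / actual * 100.0;
}
```

For example, a set of readings averaging 110 mm against a true distance of 100 mm yields a 10% error.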

Figure 2: Microcontroller and MP3 player casing for prototype one

Figure 3: Cover for microcontroller and MP3 casing from Figure 2

Figure 4: VL53L1X time-of-flight case

Figure 5: Microcontroller and MP3 player casing for prototype two

Figure 6: Cover for microcontroller and MP3 casing from Figure 5

Figure 7: HC-SR04 ultrasonic case

Figure 8: Cover for HC-SR04 casing from Figure 7


Figure 9: Prototype #1 utilizing time-of-flight, vibration, and audio technology


Figure 10: Prototype #2 utilizing ultrasonic, vibration, and audio technology

Results


Table 1: VL53L1X distance outputs in millimeters from the IDE console for multiple types of objects in light at a 90 degree angle


Figure 11: The VL53L1X time-of-flight sensor accuracy was tested in a bright environment, hitting multiple types of objects at a 90 degree angle (perpendicular to the object)


Table 2: VL53L1X distance outputs in millimeters from the IDE console for multiple types of objects in light at a 45 degree angle


Figure 12: The VL53L1X time-of-flight sensor accuracy was tested in a bright environment, hitting multiple types of objects at a 45 degree angle.


Table 3: VL53L1X distance outputs in millimeters from the IDE console for multiple types of objects in darkness at a 90 degree angle


Figure 13: The VL53L1X time-of-flight sensor accuracy was tested in a dark environment, hitting multiple types of objects at a 90 degree angle (perpendicular to the object).


Table 4: VL53L1X distance outputs in millimeters from the IDE console for multiple types of objects in darkness at a 45 degree angle


Figure 14: The VL53L1X time-of-flight sensor accuracy was tested in a dark environment, hitting multiple types of objects at a 45 degree angle.


Table 5: HC-SR04 distance outputs in millimeters from the IDE console for multiple types of objects in light at a 90 degree angle


Figure 15: The HC-SR04 ultrasonic sensor accuracy was tested in a bright environment, hitting multiple types of objects at a 90 degree angle (perpendicular to the object).


Table 6: HC-SR04 distance outputs in millimeters from the IDE console for multiple types of objects in light at a 45 degree angle


Figure 16: The HC-SR04 ultrasonic sensor accuracy was tested in a bright environment, hitting multiple types of objects at a 45 degree angle.


Table 7: HC-SR04 distance outputs in millimeters from the IDE console for multiple types of objects in darkness at a 90 degree angle


Figure 17: The HC-SR04 ultrasonic sensor accuracy was tested in a dark environment, hitting multiple types of objects at a 90 degree angle (perpendicular to the object).


Table 8: HC-SR04 distance outputs in millimeters from the IDE console for multiple types of objects in darkness at a 45 degree angle


Figure 18: The HC-SR04 ultrasonic sensor accuracy was tested in a dark environment, hitting multiple types of objects at a 45 degree angle.


Table 9: Calculated percent error for each set of data from Tables 1 to 8. The lower the percentage, the more accurate and reliable the sensor is in the given angle and light conditions

Data Analysis

The types of objects for testing were chosen to imitate obstacles one would encounter on the streets or inside commercial buildings. A person imitated pedestrians walking on the streets. A clear plastic or glass object imitated the glass windows of buildings. The black and white objects provided a range of accuracies between darker and lighter colors across the visible spectrum. A metallic object imitated vehicles and metal poles. Based on the data gathered, the VL53L1X time-of-flight sensor is more accurate at shorter distances (0 mm to 508 mm) and shows a larger range of error as an object moves farther from the sensor (1016 mm to 2794 mm). To ensure that the user has enough reaction time, the warning threshold was set at two meters. The VL53L1X emits a class one laser, which can be obstructed depending on the material it hits. Because the laser is light, it can also be interfered with by outside light sources, including artificial lighting such as street lights and vehicle lights, as well as sunlight. Absorbance matters as well: darker colors absorb light better than lighter colors, and the data show that black objects degrade the sensor’s accuracy. A black object absorbs the laser, producing a much longer, inaccurate reading. Clear objects degrade accuracy only at a 45 degree angle (Figures 12 and 14): when hit at an angle, a clear surface reflects the laser away from the sensor, again producing a much longer, inaccurate reading. According to the data, metallic objects did not significantly decrease the sensor’s accuracy, despite being the most reflective objects tested.
This follows from how time-of-flight sensors operate: the emitted light does not reflect in a single direction, but scatters at many angles when it reaches an object. The strong reflectance of metal scatters the light better than any other object tested, so even a metallic object at a 45 degree angle scatters enough light back toward the VL53L1X. Clear objects, with their lower reflectance, give the light a lower chance of scattering back toward the sensor; if a clear object had reflectance properties close to those of metal or a mirror, the clear glass/plastic data in Figures 12 and 14 would match the metallic data in the same figures. The ambient light level also affects the distance reading: bright ambient light in the afternoon causes shorter readings, while darker ambient light causes longer readings. This proved to be a significant factor while testing the product in sunlight. Because the sun produces a significant amount of UV and IR radiation, it interferes with the light detector inside the VL53L1X. The sensor determines distance with infrared laser light, so the sun’s IR causes it to register returns much closer than they actually are; in harsh daylight the sensor consistently produced readings under one foot, whereas indoor lighting caused only negligible fluctuations. Additionally, the sensor sometimes fluctuated below one meter even when the actual distance was greater, due to dust particles or small objects briefly interfering.
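For reference, the underlying time-of-flight principle converts a light pulse’s round-trip time into a distance. The VL53L1X performs this calculation on-chip; the sketch below is only an illustration of the principle, and the names are not from the project code.

```cpp
#include <cassert>
#include <cmath>

// Speed of light in millimeters per nanosecond.
const double SPEED_OF_LIGHT_MM_PER_NS = 299.792458;

// A time-of-flight sensor measures the round-trip time of an emitted
// light pulse; the one-way distance is half the round trip.
double tofDistanceMm(double roundTripNs) {
    return SPEED_OF_LIGHT_MM_PER_NS * roundTripNs / 2.0;
}
```

Because light travels roughly 300 mm per nanosecond, a round trip of about 6.7 ns corresponds to a target one meter away, which is why scattered or absorbed returns translate directly into distance errors.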

The HC-SR04 ultrasonic sensor was determined to be more accurate and reliable when detecting objects directly in front, perpendicular to the sensor, because sound waves are not obstructed by differences in lighting or in the material of the object they hit. This is because the HC-SR04 produces sound waves while the VL53L1X produces light waves. The ultrasonic pulses emitted by the HC-SR04 have a frequency of about 40,000 hertz, well above the roughly 20,000 hertz limit of human hearing, so few objects on the streets produce sound at that frequency and acoustic interference is unlikely. However, unlike the light from the VL53L1X, the sound waves do not scatter when they strike an object. As a result, an object at a 45 degree angle significantly decreases the accuracy of the ultrasonic sensor: the sound waves reflect away, and the sensor never receives the echo. This is what happened in Figures 16 and 18, where the distance readings for all objects tested were greater than the actual distance. Additionally, unlike the VL53L1X, the HC-SR04 produced distance readings slightly lower than the actual distance. This is not a problem, as a lower reading gives the wearer more reaction time.
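The ultrasonic measurement is analogous to time-of-flight but uses the speed of sound: the HC-SR04 reports an echo pulse width, and the round-trip time is halved to obtain the one-way distance. A minimal sketch, with illustrative names (the speed-of-sound constant assumes room temperature):

```cpp
#include <cassert>
#include <cmath>

// Speed of sound in air at room temperature, in mm per microsecond (~343 m/s).
const double SPEED_OF_SOUND_MM_PER_US = 0.343;

// The HC-SR04 reports the echo pulse width in microseconds; the pulse
// covers the round trip, so halve it for the one-way distance.
double ultrasonicDistanceMm(double echoDurationUs) {
    return echoDurationUs * SPEED_OF_SOUND_MM_PER_US / 2.0;
}
```

An echo of 1000 µs thus corresponds to roughly 171.5 mm; if the echo reflects away at an angle and is never received, the sensor times out and reports an overly long distance, matching the 45 degree results above.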

For the first prototype, which used the VL53L1X time-of-flight sensor, an algorithm was programmed to select the maximum distance over 15 distance readings in order to minimize the sensor’s errors. If an obstruction is detected for less than a second, the reading is disregarded and no warning is transmitted, ensuring that a warning is not relayed to the user when no real danger or risk is approaching. This does not, however, solve the problem of harsh sunlight. For the second prototype, which used HC-SR04 ultrasonic sensors, three sensors were used to minimize inaccurate distance readings for angled objects: one detects objects in front, while the other two detect objects at a 45 degree angle in either direction. If an angled object in front is missed by the front sensor, one of the angled sensors will catch it. Both prototypes should be used alongside the white cane, as the survey results suggest the cane is useful for detecting objects on the ground; obstacles not in contact with the ground cannot be detected by a white cane. This product covers that blind spot, detecting raised objects while the white cane detects ground objects or holes in the ground. Additionally, according to the survey, the device should only be used by experienced white cane users, because one cannot rely solely on technology. At a young age, most blind individuals are learning to use a white cane effectively; if they are exposed to a device capable of detecting obstacles, they may prefer it over the white cane, which is a safety concern for them and for others around them.
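The maximum-over-15-readings filter described above can be sketched as a simple window function. This is an illustration of the idea, not the project’s actual firmware; the names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Take the maximum over a window of recent readings (the project used 15).
// A momentary short reading -- dust, or a small object passing for a
// fraction of a second -- cannot pull the maximum down, so brief
// obstructions are ignored and no false warning is raised.
double filteredDistance(const std::vector<double>& window) {
    return *std::max_element(window.begin(), window.end());
}
```

With this filter, a single spurious 350 mm reading inside a window of ~2000 mm readings leaves the reported distance at 2000 mm; only an obstruction that persists across the whole window lowers the output and triggers a warning.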

Conclusion

The product takes the form of a belt, recommended to be worn on the chest and secured with velcro. In the first prototype, the VL53L1X distance sensor is encased in a cube and positioned at the center of the body facing forward. The sensor emits a class one laser that reflects off the object in front of it, detecting obstacles up to 4 meters (roughly 13 feet) away, and begins warning the user at 2 meters (roughly 6.5 feet) with a slow “beeping” through a 3.5 mm headphone jack. At 1 meter, the beeping quickens and the vibration motors activate so the user can feel the warning on their body. At 400 millimeters (roughly 16 inches), the beeping becomes steady and the vibration motors run at maximum power to warn the user that they are about to come into contact with an obstacle. The final prototype consists of three HC-SR04 ultrasonic sensors, one facing forward and two angled to the left and right. Each sensor has its own vibration motor, so the wearer knows the direction of the obstacle. The same warning distances were used for both prototypes. The product will be most beneficial for blind individuals who are not yet independent or comfortable navigating alone on the streets. The device will not completely replace the white cane; its purpose is to detect raised objects that the white cane cannot. Ideally, a product would combine both the VL53L1X time-of-flight sensors and the HC-SR04 sensors, so that if one provides false information, the other can supply the correct information to the user.
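The tiered warning behavior can be expressed as a small decision function mapping a distance reading to a warning level. This is a sketch assuming the thresholds stated above (2 m, 1 m, 400 mm); the names are illustrative and not from the project code.

```cpp
#include <cassert>

// Warning tiers for the belt (distances in millimeters).
enum WarningLevel { NO_WARNING, SLOW_BEEP, FAST_BEEP_VIBRATE, STEADY_MAX_VIBRATE };

// Map a distance reading to a warning tier: slow beeping begins at 2 m,
// faster beeping plus vibration at 1 m, and a steady tone with maximum
// vibration at 400 mm.
WarningLevel warningFor(double distanceMm) {
    if (distanceMm <= 400.0)  return STEADY_MAX_VIBRATE;
    if (distanceMm <= 1000.0) return FAST_BEEP_VIBRATE;
    if (distanceMm <= 2000.0) return SLOW_BEEP;
    return NO_WARNING;
}
```

In the final prototype, a function like this would run once per filtered reading for each of the three sensors, driving that sensor’s vibration motor and the shared audio output.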

After the product testing procedures, feedback on the product was obtained. The users had enough reaction time to stop before coming into contact with obstacles. The blind individuals stated that the device was easy to use and understand, so it can be utilized by most age groups. However, the device should be used by experienced cane users, as it should not be prioritized over the white cane. Overall, these devices (Figures 9 and 10) are beneficial for detecting objects in the user’s blind spots, improving safety while navigating the streets or large buildings.

Science Fair Poster Board
