When people think of modern assistive technology, they often picture devices like hearing aids or motorized wheelchairs. For less technologically complex devices, they might picture crutches or communication boards.
(If you are less familiar with assistive technology for people with autism spectrum disorder, you might not know what a communication board is. To learn more about what they are and how they are used, click here.)
One of the more recent advances in assistive technology is the application of machine learning. For those unfamiliar with the concept, machine learning refers to computer programs that adjust how they process information based on performance feedback, so their interpretations of data improve over time. To learn more, I highly recommend the “Deep Learning” video series by 3Blue1Brown. The first video in the series is included below:
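If you'd like to see the “performance feedback” idea in miniature before watching, here is a toy sketch in plain Python (no real ML library, and the numbers are made up for illustration): a one-parameter model makes a guess, measures how wrong it was, and nudges its parameter accordingly.

```python
# Toy illustration of learning from performance feedback:
# a one-parameter model repeatedly adjusts its guess based on its error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, true output) pairs
weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.05  # how strongly feedback adjusts the weight

for epoch in range(100):
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # the performance feedback
        weight -= learning_rate * error * x  # adjust using that feedback

print(f"learned weight: {weight:.3f}")  # converges toward 2.0
```

Real machine learning systems work with millions of parameters instead of one, but the loop is the same: predict, measure error, adjust, repeat.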
There are many assistive technology applications for machine learning. One very popular example is speech-to-text software, which uses machine learning to improve its accuracy.
Captioning is very useful to many people with disabilities. For example, people with hearing loss can use captions to follow the content of a video. However, captioning done by a human is a very time-consuming process, and with 300 hours of video uploaded to YouTube every minute (that's about 432,000 hours of new video every day), relying on humans alone simply doesn't scale. Automatic captioning is therefore a very useful tool to expedite the process.
But automatic captioning has had its flaws. When the feature was first released, it became very popular to joke about how inaccurate the captions were. Though this can be quite comical (YouTubers Rhett and Link actually built a web series around the automatic captioning feature), it is also a real problem for people who rely on those captions to understand a video's content. Machine learning is helping to make captions more reliable.
YouTube uses speech recognition and Google's machine learning software to improve its automatic captioning system. The software has become advanced enough that it can now recognize sounds beyond speech, such as clapping. To learn more about this, you can go here or here.
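To get a feel for what the speech-to-text step looks like in code, here is a minimal sketch using the open-source SpeechRecognition Python package. To be clear, this is not YouTube's actual pipeline, and the audio filename is a made-up example; it just shows the basic shape of turning audio into a caption.

```python
# A rough sketch of automatic captioning with the open-source
# SpeechRecognition package (pip install SpeechRecognition).
# Not YouTube's system -- just an illustration of the speech-to-text step.

import speech_recognition as sr

recognizer = sr.Recognizer()

# "clip.wav" is a hypothetical audio clip extracted from a video.
with sr.AudioFile("clip.wav") as source:
    audio = recognizer.record(source)

try:
    caption = recognizer.recognize_google(audio)
    print("Caption:", caption)
except sr.UnknownValueError:
    # This is where those famously funny captions come from:
    # the model couldn't make sense of the audio.
    print("Speech was unintelligible")
except sr.RequestError as err:
    print("Could not reach the recognition service:", err)
```

Notice that the hard part, the trained recognition model itself, is hidden behind a single call; the years of machine learning research are what make that one line increasingly accurate.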
There are many more possible applications of machine learning to help people with disabilities. Do you have any ideas on how machine learning could improve assistive technology? Let me know in the comments below!
Such an interesting blog! I am a big fan of automatic captioning when I don't have headphones in public and want to watch a video, or when I'm trying to understand a video in another language. While there are still many flaws now, I believe the accuracy will get better and better.