Blog 10 – Reflections & Projections

Well, this is my last blog post for my Human-Machine System Design course at Tufts University.  Rather than discussing a topic proposed in lecture, this post is a chance for me to reflect on the course, its content, and how it has influenced my predictions regarding the future of human-machine system design.

My two favorite topics that we talked about in this course were machine learning and the importance of task analysis.  

I’ll admit that the main reason I was excited about the machine learning topic is that it has come up many times in my conversations with engineers and computer scientists, but I never had the opportunity to learn more about it until this semester.  This course let me learn a bit more about how machine learning works, which helped me understand what it can actually do and what its current limitations are.  I really liked the resources that Professor Intriligator shared with us regarding this topic.  My favorites were the Neural Net Playground (which let us manipulate a neural network’s structure and see how that changed its performance) and the Quickdraw website (which let us add data samples to a machine learning project in a fun way).

Growing up, I was actually exposed to a lot of task analyses.  I knew many people my age with intellectual disabilities who needed certain activities to be listed in extreme detail and in exact order, or else they could not complete the task.  Because of this, I was already aware of the importance and difficulty of creating a truly good task analysis before starting this course.  However, this course directly related the concept to my engineering work in a way I had not thought about before.  Additionally, even as someone with a background in the subject, I found Professor Intriligator’s lessons on the topic very informative.  I had never thought about classifying tasks the way we learned to do in class (“Information, Analysis, Decision, Action”).

I think that the course content was very good.  The end-of-the-year project, where we used GiggleBots, was interesting.  I liked the freedom we had when designing our GiggleBot and trying to focus on different aspects of its design.  It allowed us to be creative and see if we knew how to apply the concepts we had been learning about all semester. 

The GiggleBot we used for our final project, which we gave a holiday theme.

Overall, I really enjoyed this course.  However, it has made me somewhat concerned for the future of human-machine system design.  While we learned about amazing possibilities from an engineering and design standpoint, we also learned about the lack of limitations and regulations on many technological developments.  It has made me more aware of technology’s ability to manipulate users.  For example, Spotify can influence your music selections.  Arguably, they could promote certain artists to people they know would like their music, and ultimately control many performers’ ability to succeed in the music industry.  Currently, we have to trust that Spotify is not manipulating the industry, and there is no easy way to stop them if it turns out they are.

Spotify logo.

On a more positive note, this course makes me very hopeful for the future of assistive technology (in case this is the first of my blog posts that you are reading, I will let you know that this topic is very important to me).  This class has shown me that there are some amazing students and professionals who are working to educate themselves and improve their ability to make usable devices that reflect the needs of all individuals.  I look forward to seeing the improvements that come about in assistive technology in the next 100 years.

I think that machine learning in particular will play a large role in assistive technology in the future (as I mentioned in my fifth blog post!).  Though the role of artificial intelligence in this field will increase, I believe that humans will play a crucial role in the process for a long time.  This is not because I think machines will not be good enough at designing assistive technology. Rather, I think that humans will remain involved in the process because designing assistive technology helps foster respect and acceptance of those with disabilities.  

Overall, I really liked this course, and feel that I have learned a lot.  I am happy to have had the opportunity to reflect on what I’ve learned, because it was more than I initially realized.  I think that the lessons I have learned here will follow me throughout the rest of my engineering career.  

Thank you for reading this blog and following me on my journey throughout this course!


Blog 9: Chatbots and Assistive Technology

Chatbots are a pretty popular type of software application.  Interacting with a chatbot is meant to be reminiscent of communicating with another human via instant message, email, or text.  Not all chatbots are the same: some are extremely simple and limited (e.g. only able to respond to specific prompts), while others are more complex (e.g. able to use machine learning to improve their communication abilities and accept a wider variety of commands).

For more information about chatbots, see the video below:

Chatbots can be a very easy user interface for people to interact with.  Rather than having to learn a programming language to interact with a computer, or even just search through a bunch of menus or webpages, users can type requests and commands to a chatbot to get the information they are looking for.  In my opinion, a good chatbot should take much less time and skill to learn than most modern software.

Unfortunately, current chatbots are not always the most useful, because they usually can only respond to pretty specific prompts (e.g. “show me recipes”).  Future programs will hopefully be better at interpreting variations in conversation style between users and still understand what is being requested (e.g. “show me recipes” & “What are good recipes?”).  I think that the usefulness of chatbots in assistive technology will need to be revisited in the next few years, once chatbots become easier for naive users to interact with.
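To make this concrete, here is a minimal sketch of a keyword-based chatbot in Python.  It is purely illustrative (the RESPONSES table and reply function are made-up names, not any real product’s code), but it shows how even light normalization lets a bot accept both phrasings above, where a strict exact-match bot would only accept one.

```python
# A minimal keyword-based chatbot (illustrative names, not a real product's code).
RESPONSES = {
    "recipes": "Here are some recipes you might like: ...",
    "weather": "Today's forecast is: ...",
}

def reply(user_message: str) -> str:
    """Match normalized keywords instead of requiring an exact prompt."""
    words = [w.strip("?!.,") for w in user_message.lower().split()]
    for keyword, response in RESPONSES.items():
        if keyword in words:
            return response
    return "Sorry, I don't understand. Try asking about recipes or weather."

# Both phrasings reach the same intent, unlike a strict exact-match bot:
print(reply("show me recipes"))
print(reply("What are good recipes?"))
```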

However, there is already some research being conducted that explores the usefulness of chatbots as assistive technology devices.  Here is a link to an article, “Chatbots as a User Interface for Assistive Technology in the Workplace” by Bächle et al.  The research conducted by this group concluded that “the chatbot technology is a viable technology for certain types of ambient assistive technology.  However, it comes with some limitations and it is not suitable for all target groups.”

How do you think chatbots can be used in the realm of assistive technology?  Let me know in the comments below!

Blog 8: The Internet of Things (IoT) & Assistive Technology (AT)

The Internet of Things is defined by the Oxford Dictionary as “the interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data”. 

Two examples of popular IoT devices in 2019 are the Amazon Echo and the Belkin WeMo Smart Light Switch.

Back during my undergraduate education at the University of New Hampshire, I worked in the Occupational Therapy department for Dr. Therese Willkomm.  One of Dr. Willkomm’s talents is creating assistive technology solutions out of cheap and/or easily available materials.    

Dr. Willkomm uses Apple products a lot in assistive technology problem-solving.  One feature she uses is Siri, Apple’s voice-controlled intelligent assistant.  

What is great about Siri is that it is compatible with many other IoT devices.  This allows you to use voice commands to control multiple features in your house.  For example, you might connect your iPhone to a smart light switch.  This is useful for people who cannot reach light switches in inconvenient places.

I have a friend who has a smaller range of motion in her arms than most of the people who worked in our office.  This presented a challenge when turning on the lights, because the light switch was located between a wall and a shelf, so reaching it easily required full extension of your arm.  The solution was a light switch adaptation that covered the switch, turning it into a push button, and synced with its iPhone app.  This alone is an example of an IoT device being used as an assistive technology solution.  However, to make the switch even more accessible, the app could be connected to Siri so that the lights could be turned on and off using voice commands instead.
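To illustrate the kind of glue code involved, here is a minimal Python sketch.  It assumes a hypothetical smart switch that exposes a simple local HTTP API; real devices like the WeMo switch use their own apps and protocols, so the URL and JSON payload below are made up for illustration.

```python
# A sketch of controlling a networked light switch from code, assuming a
# HYPOTHETICAL device that exposes a simple local HTTP API. Real products
# (e.g. the WeMo switch) use their own protocols; the URL and payload
# below are made up for illustration.
import requests

SWITCH_URL = "http://192.168.1.50/api/switch"  # hypothetical device address

def set_light(on: bool) -> None:
    """Tell the device which state the switch should be in."""
    response = requests.post(SWITCH_URL, json={"state": "on" if on else "off"})
    response.raise_for_status()  # fail loudly if the device rejects the request

set_light(True)  # a voice assistant could trigger this when it hears "lights on"
```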

AT solutions are not limited to devices designed and marketed specifically for people with disabilities.  In fact, some AT-specific devices aren’t available to people due to financial constraints.  Utilizing more popular consumer devices that are part of the Internet of Things can allow people to build high-tech solutions at significantly lower cost.

If you are interested in learning more about the relationship between IoT and AT, here is an article by Scott Hollier and Shadi Abou-Zahra.

If you are interested in learning more about Therese Willkomm’s work in general, here is one of her videos:

What popular IoT device can you think of that might be a useful assistive technology solution? Let me know in the comments below!

Blog 7: GPS & GIS in Assistive Technology

Most people in the United States know what GPS is.  It’s a system that determines your location on Earth using satellites that orbit the planet.  In our personal lives, we view this data in platforms such as Google Maps or the Find My iPhone app.  However, a GPS data point is actually just a pair of latitude and longitude coordinates.  That is not something humans can easily interpret.  That is why these platforms show the coordinates of locations we care about with respect to a map or some other reference.

An example Find My iPhone screen showing the location of an electronic device. Source: https://support.apple.com/en-us/HT210515

Platforms like these, which display GPS information, are examples of geographic information systems (GISs).
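As a small illustration of why raw coordinates need a reference, here is a Python sketch that turns two latitude/longitude pairs into a distance a person can actually use, via the standard haversine formula.  The coordinates are approximate and purely for illustration.

```python
# Turning raw GPS coordinates into something humans can use: the haversine
# formula gives the great-circle distance between two lat/lon points.
# Coordinates below are approximate and purely illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in kilometers between two points on the Earth's surface."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Roughly the distance from Tufts University to downtown Boston:
print(f"{haversine_km(42.4075, -71.1190, 42.3601, -71.0589):.1f} km")
```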

Today, I am going to talk about the importance of GISs in assistive technology.  Imagine this hypothetical scenario:

You are a full-time wheelchair user who is meeting friends to see a play at an old theater.  The theater is on a corner in a busy urban area, so though it does not have a designated parking lot, there are handicapped parking spaces on the streets on either side of the building (Street 1 and Street 2).  Your friend tells you that the front doors are on Street 1, so that is where you decide to park your car.  However, when you get to the front door, you see that the entrance has a flight of stairs leading up to it, so you cannot get in.  You follow the signs toward the handicapped accessible entrance, but it is all the way down at the other end of Street 2.  If only you had known that Street 2 was the street with the handicapped accessible entrance, you would have parked there!

This is a very specific example, but there are plenty of cases where it is useful to have accessibility information for an area.  College campuses are one such case.  To see some example accessibility maps for college campuses, you can click here for one by Harvard University and here for one by MIT.

Though it is awesome to see maps like this becoming available for people with disabilities, there is still a lot of work to be done in this area.  Google Maps has released wheelchair-accessible routes, and hopefully will continue to expand its platform to be even more useful for people with disabilities.

What do you think of GIS?  How do you think it can be used to improve assistive technology?  Let me know in the comments below!

Blog 6: Socially Assistive Robotics (SAR) and Autism Spectrum Disorder (ASD)

I find socially assistive robotics to be a very interesting field.  I think that these robots hold a lot of potential in helping humans, but there is also the potential to do harm.

Specifically, I’m going to talk about socially assistive robots (SARs) that are meant to help people with autism spectrum disorder (ASD) improve their social skills.  To learn about this area in more detail, you can read the article, “Socially assistive robots: current status and future prospects for autism interventions” by Dickstein-Fischer et al.  This discussion, however, will be about the general application of SARs in this area, rather than specific cases.

Nao Robot by Aldebaran, sometimes used as an SAR. The picture is taken from this article.

The Positives

First, I want to talk about some of the very useful aspects of SARs in helping people with ASD improve their social skills.  The article mentioned above lists some compelling advantages: SARs are more accessible, more affordable, and impose less administrative burden than current practices.  This area has been researched significantly, to the point that I suspect SARs will become commonplace for this application very soon.

My Concern

However, there is a big problem that I don’t think is being considered enough. 

My brother is two years older than me and has autism spectrum disorder.  Sometimes people are surprised to learn that out of the two of us, my brother was infinitely more popular than me by the time we were both in high school (and I was by no means antisocial). 

Now, part of the reason my brother was so popular was his personality: his kindness, good humor, and general likability have always made people like him.  However, he still had to overcome the prejudice that people have toward individuals with intellectual disabilities.  This is not necessarily blatant and aggressive prejudice; sometimes it is more subtle.  People aren’t always naturally inclined to try and hang out with people different from themselves.

One thing that I think helped my brother get used to other people, and helped other people get more used to him and people similar to him, was the human interactions he had throughout his childhood.  This included his peers, but it also extended to his instructors, family, and doctors. 

I think that even if SARs end up being just as effective at teaching social skills as human specialists, we are potentially exacerbating the already prevalent discrimination that people with autism spectrum disorder face.  We are removing one of their crucial moments of human interaction from their childhood. 

So, What Should Be Done?

Honestly, I don’t think we will stop using SARs, even if they are proven to be less effective than human instructors.  The financial benefits alone will be enough to make them commonplace.

Rather than halt progress with SARs in educating people with autism spectrum disorder, I hope that social reform makes the currently invaluable interactions with human specialists unnecessary, because children with autism spectrum disorder will have sufficient social interaction with other humans through other means.

What do you think?  Do you think this is a problem?  If so, how do you think it can be addressed?  If not, do you have any other concerns about SARs? Let me know in the comments below!

Blog 5 – Machine Learning: The Next Big Thing in Assistive Technology

When one thinks of modern assistive technology, they often think of things like hearing aids or motorized wheelchairs.  If they’re thinking of less technologically complex devices, they may think of things like crutches or communication boards.

(If you are less familiar with assistive technology for people with autism spectrum disorder, you might not know what a communication board is.  To learn more about what they are and how they are used, click here.)

Communication Board. Link: https://www.amazon.com/EZ-Patient-Communication-Board-Picture/dp/B003JUUI0K

One of the more modern advancements in assistive technology is the application of machine learning.  For those unfamiliar with the concept, machine learning refers to computer programs that adjust the way they process information using performance feedback, in order to provide better interpretations of data.  To learn more, I highly recommend the “Deep Learning” video series by 3Blue1Brown.  The first video in the series is included below:

There are many assistive technology applications for machine learning.  One very popular area is speech-to-text software that improves its accuracy using machine learning.
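Before we get to captioning, here is a toy Python sketch to make the idea of “adjusting using performance feedback” concrete: a one-parameter model repeatedly nudges its weight to reduce its error on known examples.  This is gradient descent in miniature, nothing like YouTube’s actual system.

```python
# A toy version of "learning from performance feedback": a one-parameter
# model nudges its weight to shrink its error on known examples.
# This is gradient descent in miniature, not a real captioning system.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x
w = 0.0             # the model's single adjustable weight
learning_rate = 0.05

for step in range(100):
    # Feedback: on average, how wrong is the prediction w * x, and in
    # which direction should w move to be less wrong?
    gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * gradient

print(f"learned w = {w:.3f} (the hidden rule was y = 2x)")
```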

Captioning is very useful to many people with disabilities.  For example, people with hearing loss can use captions to understand the content of a video.  However, captioning is a very time-consuming process when done by a human, and considering that 300 hours of video are uploaded to YouTube every minute, it is not feasible for humans to caption everything.  Automatic captioning is a very useful tool to expedite this process.

But automatic captioning has had its flaws.  When the automatic captioning feature was first released, it became very popular to joke about how inaccurate the captions were.  Though this can be quite comical (YouTubers Rhett and Link actually created a web series built around the automatic captioning feature), it is also problematic for people who rely on those captions to understand a video’s content.  Machine learning is helping to make captions more reliable.

YouTube uses speech recognition and Google’s machine learning software to improve its automatic captioning system.  The software has gotten so advanced that it can now recognize sounds beyond speech, such as clapping.  To learn more about this, you can go here or here.

There are many more possible applications for machine learning to help people with physical disabilities.  Do you have any ideas on how to use machine learning to help improve assistive technology?  Let me know in the comments below!

Blog 4: Future Humans

The past few lectures in Human-Machine System Design have covered the topic of “Future Humans”.  Within this topic we have discussed drastic surgical alterations to humans (including adding advanced technology into a human system), the potential for significant changes to a human’s environment, and the evolutionary changes that may result from modern technology.  The whole topic is reminiscent of science fiction in many regards. 

I am excited about the idea of future humans as it relates to physical disabilities and medical ailments.  Currently, there are many diseases which are incurable, or that leave lasting physical limitations.  However, future humans may be able to overcome these diseases, either through advanced medical treatments or through human evolution.

It would be fantastic if, one day, humanity no longer lost people to illnesses like cancer or Huntington’s disease.

The pink ribbon is often a symbol for breast cancer awareness.

However, on the other side of things, I hope that developments in medical technology do not limit our compassion for those with physical disabilities.  Even as future technology cures some illnesses, new ones will likely develop, and there will be others that even future humans lack the knowledge or technology to cure.

Engineering solutions will not resolve the prejudice and discrimination that people with physical disabilities face.  I truly believe that future humans will develop great cures for complicated medical challenges.  I hope they will also have made equally great strides in kindness and empathy toward those with physical disabilities.

Blog 3: Signal Detection Theory

In Professor Intriligator’s class, we have been talking about Signal Detection Theory.  According to Nicole D. Anderson, “The general premise of SDT is that decisions are made against a background of uncertainty, and the goal of the decision-maker is to tease out the decision signal from the background noise (Anderson 2015).”

For those less familiar with this area, here is a real-world example: if you are in a crowded room trying to listen to your friend as they speak, you need to filter out the other noises in the room and focus only on what your friend is saying.  But sometimes you think your friend said something when it was actually a stranger.  Filtering this information and figuring out what was actually said by your friend is an example of signal detection.
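For readers who want the math, Signal Detection Theory summarizes a decision-maker’s sensitivity with a statistic called d′ (d-prime), computed from hit and false-alarm rates.  Here is a small Python sketch; the rates below are made up purely for illustration.

```python
# Computing d' (d-prime), SDT's standard sensitivity measure, from a
# listener's hit rate and false-alarm rate. The rates are made up.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Separation between signal and noise distributions, in z-score units."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# You correctly catch 90% of what your friend says, but 20% of the time
# you attribute a stranger's words to your friend:
print(f"d' = {d_prime(0.90, 0.20):.2f}")  # about 2.12
```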

When learning about Signal Detection Theory, I became curious about cases where people have intentionally exploited flaws in signal detection systems.  One case that came to mind is the movie trope where someone fools a fingerprint scanner using common household items.  Though in the real world it is not necessarily that easy, it turns out it is not very hard to fool fingerprint sensors.  In fact, it can take only 13 minutes.

At least, that is the case with Samsung’s Galaxy S10 phone.  Its fingerprint scanner was shown to be fooled by a 3D-printed model of the user’s finger, and the model took only 13 minutes to print.

At first, I thought that it would be rather difficult to 3D print someone’s fingerprint, especially without their permission.  But it turns out that all it took was a photo of someone’s fingerprint on a wineglass and some Photoshop.

As people become more reliant on fingerprint scanners and other sensors, we need to consider their signal detection capabilities more seriously.  Otherwise, I fear they will become a security risk with long-lasting consequences.

I use the fingerprint scanner on my personal phone, but I also avoid putting my financial information or any important documents on my phone.  Do you trust the security provided by a fingerprint scanner?  If not, what would need to change before you trust it?  Let me know in the comments below!

Blog 2: Task Analysis

Hello again everyone!

This Friday’s blog will be all about Task Analysis!

For those of you who don’t know, let’s first define what a task analysis is. In lecture we were introduced to the following formal definition:


The study of what an operator (or team of operators) is required to do (their actions and cognitive processes) in order to achieve system goals.

(Stanton, Salmon, Walker, Baber & Jenkins, 2005, p. 45)

For every process there are steps required to complete it. A task analysis helps determine what these steps are, what tools are required, what background knowledge is necessary, and how long it will take before a step is complete.

Before creating a task analysis, I think it is extremely important to establish the following:

Why are you creating a task analysis? Who is it for?

This is important, because it impacts your approach when creating the rest of your task analysis.

Now, since I love assistive technology, I’m going to make up an assistive technology example. I will also discuss why, in assistive technology specifically, it is important to know why you are developing the task analysis.

So, here is our scenario:

David is a man who is blind. He always bought his tea from a cafe down the street from his office. However, to save money, he wants to learn how to use the electric kettle in his company’s kitchen. He asks us to help him learn how to use a tea kettle.

Electric tea kettle

In order to help David, we are going to do a task analysis. Initially, we break the process down into four steps.

  1. Make sure the kettle is plugged in
  2. Fill the kettle with water
  3. Turn on the electric kettle
  4. Pour the hot water into the cup and add tea.

We meet David in the company’s kitchen. We tell him the process, and already he has some questions.

  • If the kettle isn’t plugged in, where is the outlet located?
  • How do I know when the electric kettle is done heating up the water?

Now, in David’s case, it is probably easiest to just physically guide him to the outlet and assume he will remember where it is in the future. But what if you were giving these directions to David remotely? Is it your responsibility to also help David find the outlet, or is that something you shouldn’t include in your directions? You have to decide what level of detail to give David. In this scenario, I would say the right amount of detail is however much David wants from you. More generally, your client or user will largely determine the level of detail that your task analysis needs.

Many electric kettles have visual indicators that they are done heating the water: a light turns off or a switch changes position. But David cannot see those visual cues. What else happens when the kettle is done boiling the water? When the switch returns to its original position, it makes a click. When the water stops boiling, it stops moving, and the kettle goes quiet. It is your job to make sure that the information you provide is useful for your audience.

Even if you are not interested in assistive technology, this is an important factor to consider. Often, task analyses are completed to determine if a process once done by a human can be taken over by a robot. However, a robot probably has a different way of processing information than a human worker. In this case, a task needs to be explained at a level of detail and in such a way that the steps can be translated to match the robot’s capabilities.
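One way to see this is to encode a task analysis as structured data, so the same steps can be rendered at different levels of detail for different audiences (or mapped onto a robot’s capabilities). Here is a Python sketch using David’s kettle task; the field names and the extra cues are made up for illustration, based on the example above.

```python
# Encoding David's kettle task as structured data (field names and the
# extra cues are illustrative). The same steps can then be rendered at
# different levels of detail for different audiences.

kettle_task = [
    {"step": "Make sure the kettle is plugged in",
     "nonvisual_cue": "Feel along the cord to check the plug is seated."},
    {"step": "Fill the kettle with water",
     "nonvisual_cue": "Rest a fingertip at the rim to feel the water level."},
    {"step": "Turn on the electric kettle",
     "nonvisual_cue": "The switch clicks back up, and the water goes quiet, "
                      "when heating is done."},
    {"step": "Pour the hot water into the cup and add tea",
     "nonvisual_cue": "Touch the spout to the cup's rim before tilting."},
]

# Render the steps together with the non-visual cues David asked about:
for i, s in enumerate(kettle_task, 1):
    print(f"{i}. {s['step']}")
    print(f"   Tip: {s['nonvisual_cue']}")
```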

When doing a task analysis, it is important to think beyond the way YOU would do something. You need to also consider how your audience would do it. Otherwise, you have created a set of instructions that aren’t useful for your client.

Task analysis can be challenging. I recommend having people with no knowledge of how to complete the task try to do it based solely on your task analysis. Based on their interpretation of the directions and their feedback, you can iteratively improve your task analysis.

Now, based on what we’ve discussed, the task analysis we gave David can definitely be improved. What would you change about the task analysis? Let me know in the comments below!