As ENP162 comes to an end, I would like to take time to reflect on my experiences in Human-Machine System Design. This class can definitely be characterized by its breadth of material, as we covered several relevant and important topics, from IoT to conversational interfaces. The most important skill I learned from this class was how to properly write a task analysis. Task analysis is critical at any stage of product development because it clearly lays out how the user will interact with the product and the different nuances that need to be considered in the design. Before this class, I had only a novice grasp of task analysis, but I have since developed an intuition for writing them.
In addition to skills gained, the topic that captivated me the most was GIS, because I had never been exposed to it before and realized it provides great insight into any subject. For instance, the in-class activity where we calculated the density of blue lights on the Tufts campus and analyzed its safety implications was fascinating. The GIS assignment should therefore be included in future iterations of the course because it adds dimensionality to our skills as engineering psychologists. Additionally, a topic like IoT is very relevant and encompasses human-machine systems, usability, and data storage principles. Adding a project in this area where each team builds an elementary IoT device of some sort would be challenging, but also a great learning experience. If this is not possible, then expanding on the role of UX researchers and UI designers in the IoT realm would also be very valuable.
My perspective on the future of human-machine system design is that machines will have an increasing presence in our society, slowly removing the “human” aspect of it. Within the next 20 years, I think we are going to see a paradigm shift where machines end up building machines. They will have the tools and algorithms to design and build everything more precisely and cost-efficiently than humans. Since I still see human value in supervising these machines, I predict that most jobs will become “attendant” or supervisor roles, where people know everything about the machine and ensure its tasks run smoothly. If this happens, the job ecosystem will change drastically and several of today’s professions will become obsolete. I guess we just have to wait and find out!
Over the past few years, conversational interfaces have been integrated into products from several companies, including Apple, Google, and Facebook. A conversational UI provides an alternative way of interacting with technology. For instance, Siri can accept voice commands and send out text messages, giving users an opportunity to communicate with their friends without typing. I definitely understand the advantages of conversational UI, but I also believe it has a lot of downsides.
Personally, my parents exclusively use Siri to craft their text messages, and it pains me to see them try to fix the dictation errors, or to receive messages so error-filled that they are cryptic. I want to emphasize “conversational”: computers and other technologies should have limited capabilities in this realm. For instance, major finance companies are starting to implement AI that can report the performance of a user’s portfolio. However, for a subject of high importance like personal finances, it is more comforting for a user to hear a report from their broker, because a human can provide an organic conversation and emotions like empathy. Although I have a negative perspective on the way conversational UI is influencing behaviors like texting, and believe its value is not needed in certain areas like finance, I do believe it has advantages in accessibility.
Conversational UI in a device is critical for those who are visually impaired, because voice commands are sufficient to use the device. As a society, we are constantly striving to achieve accessibility and equality for all in technology, and a UI like Siri is a paradigm of achieving part of this goal. Therefore, the development of conversational UI is necessary to accommodate all users, even though I believe some of its applications might not be the most efficient for visually capable users.
We use several IoT devices throughout the day, including our phones and computers. An IoT device can be categorized as a piece of hardware that can send data over an internet connection. Even cars are IoT devices, as they collect various forms of data from their users and can connect to other IoT devices such as your phone. I have always thought of IoT as a rapidly growing space in technology that will make both consumers and cities smarter, but what are the limitations and vulnerabilities of these devices?
Even though these devices are growing very powerful, their vast capabilities could be employed for the wrong reasons. For instance, a nation could take control of all the thermostats in an enemy nation and shut them down during the winter. This could completely change the warfare landscape, and it exhibits that IoT can be dangerous: the bandwidth of connection is vast and could be leveraged for the wrong reasons. With great power comes great responsibility, and these devices should not sacrifice security and protection for greater, faster connectivity. Although the idea that the objects we interact with can learn more about us and form an ecosystem with other smart devices is appealing, it also gives attackers more opportunities to infiltrate our data.
As IoT devices advance, it is vital to consider their cybersecurity implications rather than prioritizing convenience and efficiency alone.
This week in class we explored the GPS/GIS space, an area that the engineering psychology curriculum has not really covered. Since it was a new topic, I came in with an open mind and was very impressed by the role GIS plays in our society. For instance, GIS maps can depict rising sea levels along certain cities and which areas will be impacted the most. These maps can shed light on the importance of issues like rising sea levels, which may be overlooked as non-urgent problems, and enable society to make smarter decisions. Additionally, we calculated the density of emergency blue lights on the Tufts campus, indicating the relative safety of different parts of campus. GIS systems are thus pertinent in several different areas, and can be used by engineering psychologists to gain more insight into users.
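The blue-light exercise boils down to a simple calculation: points per unit area for each zone of campus. Here is a minimal sketch of that idea; the zone names, counts, and areas below are all hypothetical, not the actual figures from the class activity.

```python
# Hypothetical (zone, number of blue lights, area in km^2) tuples.
zones = [
    ("Residential Quad", 12, 0.30),
    ("Academic Quad", 18, 0.25),
    ("Athletics Fields", 5, 0.40),
]

def density(count, area_km2):
    """Blue lights per square kilometer."""
    return count / area_km2

# Zones with higher density suggest better emergency coverage.
for name, count, area in zones:
    print(f"{name}: {density(count, area):.1f} lights/km^2")
```

A real GIS package would compute the areas from polygon geometry and could interpolate a continuous density surface, but the per-zone ratio above captures the comparison we made in class.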
Social robotics is a fascinating subject. With the capabilities of machine learning and artificial intelligence, robots can be programmed to essentially act as human companions. They can educate, entertain, and provide therapy for their human friends. A particularly interesting area in social robotics is therapy for people with autism. A study is being conducted by Brian Scassellati, a computer science professor at Yale University, on the effect of robotic interactions on autism. Eight families have participated in the study, each adopting a robot in their home, and the results have been very promising. The parents have reported that the robots have introduced new educational techniques and have been a very positive influence.
However, this study raises the question: what distinguishing factors are required for robots to fully make an impact in therapy? Studies suggest that a robot gaining a sense of agency over its actions is vital to making interactions with a person as organic as possible. For instance, a recent study showed that humans treat robots differently than they treat other humans when playing a game, unless the robot cheats. This is attributed to the fact that robots are usually programmed to act in a one-dimensional way, with no sense of anything beyond the rules. It is not very “robot-like” to cheat, which changes the way humans perceive the robot. I am not proposing that robots should adopt human flaws to build a better connection with their human friends, but a successful social robot will require additional programming to make it more interesting and stand out.
Music is a major part of my identity, and people all across the world would say the same, speaking to its impact on their lives. Songs have the ability to cultivate our curiosity, keep us energized throughout the day, and provide an escape from reality. We all seek this type of experience, and Spotify is giving it to us through its incredible personalization features. The most notable one, in my opinion, is the Discover Weekly playlist. Every Monday, 70 million Spotify users can look forward to a playlist of 30 songs that match their music taste. As an electro-indie fan, I have truly “discovered” my favorite songs through the Discover Weekly playlist. I can definitely say that Spotify knows my music taste better than anyone else, but how? How is Spotify able to capture our unique preferences? The answer: an underlying machine learning algorithm that explores the vast realm of music to deliver the music we want to hear.
Here is an overview of a vital algorithm that Spotify has implemented to make this feature possible:
Convolutional Neural Network (CNN): A CNN can be pretty complex to wrap your mind around. How exactly does it work? Every machine learning model needs input, and for Spotify’s CNN it is an array containing information about the frequency, duration, and amplitude of each note in a song. After the important components of the song are organized in a matrix/array, it is multiplied by another array called a filter to determine the presence or absence of a certain feature of the song (tone, melody, pitch, mood, etc.). For instance, if the product of a heavy-bass filter array and the song’s input array is zero, then we can assume this is probably not a Bassnectar tune. After this iterative process, a global temporal pooling layer is applied to generate insightful statistics, like the mean and max occurrences of a certain aspect of the song such as a chord progression. This helps further categorize the song’s genre from the audio signals, and is then used as input into what is called a fully connected layer, which computes a one-dimensional array of values that specifically correspond to that song.

Now that our morning “pump ups,” chill rewinds, and summer jams can all be represented by arrays, Spotify can find the 30 other arrays, AKA songs, with the closest matching values. This is what allows Spotify to keep fueling our musical passion and expose us to sounds we might never have encountered otherwise. I am fascinated to see how Spotify will leverage neural networks and machine learning to further enhance the way we connect with music.
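The pipeline described above (filter the signal, pool over time, compare song vectors) can be sketched in a few lines of NumPy. This is a toy illustration, not Spotify’s actual model: the “heavy bass” filter, the random signals, and the track names are all invented, and a real CNN would learn many filters across many layers rather than use one hand-written filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def song_vector(signal, filt):
    """Convolve the audio feature array with a filter, then apply
    global temporal pooling (mean and max over the whole response)."""
    response = np.convolve(signal, filt, mode="valid")
    return np.array([response.mean(), response.max()])

def cosine(a, b):
    """Similarity between two song vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

bass_filter = np.array([1.0, -1.0, 1.0])  # hypothetical "heavy bass" filter
my_song = song_vector(rng.random(100), bass_filter)

# Rank a tiny hypothetical catalog by closeness to my song's vector --
# the nearest vectors are the recommendation candidates.
catalog = {name: song_vector(rng.random(100), bass_filter)
           for name in ["track_a", "track_b", "track_c"]}
ranked = sorted(catalog, key=lambda n: cosine(my_song, catalog[n]), reverse=True)
print(ranked)
```

Discover Weekly would then correspond to taking the 30 nearest vectors in a catalog of tens of millions, where each vector comes out of the fully connected layer rather than a single mean/max pooling.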
Over the course of the last century, technology has rapidly advanced beyond what I initially perceived was even possible. In the near future, self-driving cars could be fully integrated into our society, a great example of an autonomous technology that will completely change the societal landscape. However, an advancement like a self-driving car adds convenience; it does not add capability to the human. I envision the next technological strides as something that will strengthen what we can do as human beings, like AR goggles. Some might not agree, but I believe a more robust version of AR goggles or contacts will eventually replace smartphones. This would let people use all the features a smartphone provides, but access them at any point. Even though glasses might not be aesthetically pleasing, smart bionic contact lenses are something I foresee being adopted.
Pertaining to the smart contact lenses, this will be a very valuable product and an important aspect of someone’s identity. The capabilities of this device are limitless, as it will not only be able to access the internet, but also provide key insights into a person’s health. The device could collect a person’s tears and use them as a biomarker to analyze glucose, sodium, cholesterol levels, etc. However, certain issues arise with this device, including invasiveness/irritation and preventing the circuitry from blocking the user’s vision. Nonetheless, it is definitely not out of the realm of possibility that these lenses replace smartphones. The big question is how to develop them to minimize invasiveness while sustaining capability.
Signal detection theory measures the accuracy of detecting the presence of some kind of “signal” or stimulus. The word “signal” changes meaning depending on the situation. For instance, if someone gets injured, the doctor’s analysis can be measured using signal detection theory. A “hit” would be the person pulling a muscle and the doctor correctly diagnosing the injury (responding “yes”). A false alarm occurs when the doctor observes an injury that is not there, a miss is when the doctor fails to observe an injury that is there, and a correct rejection is when the doctor correctly concludes there is no injury. It is interesting to observe that signal detection can be applied to many different situations.
Beyond observing scenarios where signal detection theory applies, this method may be a critical indicator of whether certain technologies will be integrated into society. For example, the number of hits and correct rejections made by a self-driving car is imperative for the safety of society. A “signal” could be a pedestrian crossing the street, in which case the car’s rate of hits and correct rejections needs to be essentially 100% to guarantee safety. If the self-driving car has a “miss,” the outcome could be fatal. Although something like a self-driving car needs an extremely accurate signal detection system, systems that don’t pose danger will not have to meet those thresholds.
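The four outcomes above can be turned into numbers. A small worked example, using made-up counts for a detector (a doctor, or a car’s pedestrian sensor): the hit rate and false alarm rate summarize the two kinds of trials, and the classic sensitivity measure d′ is the difference of their z-transforms.

```python
from statistics import NormalDist

# Hypothetical counts over many trials.
hits, misses = 95, 5                         # trials where the signal was present
false_alarms, correct_rejections = 10, 90    # trials where the signal was absent

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)

# d' = z(hit rate) - z(false alarm rate); larger means the detector
# separates signal from noise more cleanly.
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)

print(f"hit rate = {hit_rate:.2f}, false alarm rate = {fa_rate:.2f}, d' = {d_prime:.2f}")
```

For a safety-critical detector like a pedestrian sensor, we care about more than d′: a miss and a false alarm have wildly different costs, so the response criterion would be pushed toward “yes” even at the price of more false alarms.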
Task analysis is something that we do every day without even realizing it. For instance, think about your morning regimen before class or work. From our perspective, we have our own routine, but imagine if someone else had to replicate it exactly. It may not seem like it would require many instructions, but you would be surprised by the level of detail needed to accurately convey certain actions.
In our ENP162 class, we conducted a three-part exercise in which our group constructed a skier from pipe cleaners, foam balls, and toothpicks. The process involved several moments of improvisation to make the skier stand properly, and lacked any real technique. Therefore, writing up instructions on how to reproduce our skier was challenging.
In the picture above, you can see our family of skiers. One of the groups was able to successfully create a skier from our instructions, but we could definitely have been more thorough. This experience demonstrates that any process, regardless of complexity, requires a sufficient task analysis to enable anyone to complete it.
I challenge you to think of something you do every day, write up the process from beginning to end in sequential steps, and ask a friend to follow it. If I were to do this exercise, I would do a task analysis on cleaning my room, since it is a long and somewhat arduous process. Additionally, I really need my room cleaned, so it would directly benefit me.
What will be the role of task analysis in the future? From my last blog, we know that automation is gaining momentum and being employed across different industries. Therefore, formal task analyses will need to be written up and programmed into these AI systems to ensure precision and accuracy.