Blog #9: The Future of Chat Bots and Consumerism

Last Friday, Black Friday marked the annual start of the holiday shopping season. If any field has seen chat bots take hold, it’s consumer goods. Customer service is more often than not facilitated by bots that help consumers with all their needs.

While customer service is the primary example of how we’ve interacted with these bots in consumerism, I suspect they will soon appear and interact with us inside malls and shops. I project that the perks of online shopping, like reviews and suggestions, will begin to take hold in physical spaces through these bots. For example, imagine walking up to a clothes rack, selecting a shirt, and then having a bot walk you through a series of questions to help you decide whether or not to buy it. These bots could expand beyond customer service into becoming literal shopping assistants.

So how have people reacted, and how will they react, to chat bots taking the place of humans? According to a report published by the Capgemini Research Institute, not only do consumers prefer conversational agents and chat bots to human service, but they actually enjoy using these products. I personally like using Alexa and Siri on my phone, relying on these technologies to help me buy goods by putting items in my cart for me. If these chat bots continue to be used and enjoyed, I wouldn’t be surprised to see them applied well beyond consumer goods, in ways we would never expect.

Blog #8: IoT and Home Security

An interesting and rapidly growing application of IoT is in home security. In 2016, Business Insider predicted that by 2020, 193 million homes in the United States would have IoT smart-home applications in use.

The obvious benefit of an IoT home is the connectedness of its control systems: thermostats, lights, electronics, doorbells, and home security cameras. The Amazon Echo and the Nest use smart technologies to adjust house settings according to the user’s preferences. The ease of these technologies gives homeowners a sense of comfort, knowing their houses are being managed by these systems.

Another interesting benefit of IoT devices is the conservation of energy. While humans may forget to turn the lights off, these products won’t. They can adjust lighting when the homeowners are away and allow users to turn off lights remotely. Thermostats can be adjusted to cut heating and electric bills.

I’m curious what the next frontier for homes and IoT will be. Will we still need dog sitters? What about babysitters? Could we train these technologies to watch animals, and even babies? While the adoption of this technology for home management is taking hold, I’m curious how far we will go with the maintenance of the house, and whether this will ever pass into the realm of caring for the loved ones inside these homes.

Portfolio Assignment #4 – Driver’s EDward

Zaila Foster, Neil Haigler, and Fallon Shaughnessy

Product Brief

Driver’s EDward! This driver’s-education social robot will teach student drivers, make their learning less stressful, and step in to protect them if they end up in a dangerous situation. We expect this to be a feasible reality in 20 years, when all new cars and most cars on the road are self-driving. Our social robot will be purchased as a small box that plugs into the desired car and interfaces with the car’s self-driving capabilities and heads-up display. The heads-up display, or HUD, will still show important information in the center of the student’s field of view, but it will also show our robot’s avatar. As a social robot, our system must have something resembling a human for our user to interact with, and in this situation we feel that a non-physical representation of a robot in the HUD is much safer than an actual robot, or even an actual human, in the passenger seat. With a robot teacher in the HUD, the robot can express the fear or joy that the student’s driving evokes. Beyond showing emotion, the robot will be a constant source of encouragement and reliability for the student. Because the car is already self-driving, our robot can jump in to correct any mistakes that become dangerous, and the vehicle’s monitor can be used to show the student what they should do, or should have done, in any situation. All in all, we feel that a well-programmed robot could be far better than any driver’s ed instructor, offering the standardization, reassurance, patience, and succinctness that we all wish we had in our instructors while we went through this stressful rite of passage.

Background

We set out to make Driver’s EDward for many reasons. Learning to drive is not a standard procedure: almost every state has a different set of hoops to jump through to advance from a learner’s permit to a driver’s license. There also appears to be just as much variation in the instructors. After several interviews with people who had gone through driver’s education with a human instructor, we had a list of instructor horror stories ranging from unclear instructions, to cocaine use, sale, and arrest, to violent outbursts and unprofessional behavior. In direct response to the issues we found through interviews, we believe that a robot will be able to give clear auditory and visual instructions, that a robot will not set up a driver’s ed school as a front to use and sell cocaine, and that a robot will not be reduced to screaming and punching the dashboard when a student makes innocent mistakes at 5 miles per hour in a parking lot.

We also recognize that starting out as a student driver with no experience is stressful and dangerous enough. While drivers aged 16-20 have the lowest rate of accidents of any age group, they also have the highest rate of fatal accidents. This early stage of development and reflex learning, in which higher-brain decisions become automatic, life-saving responses that require no thought, takes time and practice. Driving is a very unnatural act as far as our evolution is concerned, and it takes time for us to turn something like slamming on the brakes or swerving to avoid a collision into a response we do not need to think about. In a lecture and interview, Dr. Divya Chandra, a trained pilot and expert on the standardization of instrument procedures and pilot human factors, explained that one of the most important things an instructor does is let students make mistakes in a safe way. This teaches students why certain rules are followed, lets them experience mistakes, and gives them raw experience correcting them, which builds life-saving reflexes. We believe that a robot able to take the wheel at the last second to protect and correct the student will be not only safer but more educational, since it will be more patient with students and allow them to make mistakes in a perfectly risk-calculated way while always being able to correct when needed.

User Physicality (Limitations, Differences, Aspirations)

If we are to make a robot that will be interacted with in a high-stress situation, it must not cause physical stress. The physical limitations of humans have mostly been taken into account in the design of the car. However, although the robot will not be physically interacted with, it could cause significant eye strain, and it could distract the driver more than teach them. We took into account the driver’s field of view and where they would need to be able to see, and made the avatar of our robot visible but unobtrusive. We also designed it to be the right brightness to prevent eye strain while still being easily visible. We had to do extensive research on human peripheral vision and the nature of the rods and cones of the eye. Human peripheral vision relies primarily on rods, which are very sensitive to contrasts of black and white, as opposed to cones, which are clustered mostly in the fovea centralis, the part of the eye responsible for the center of our vision. While the center of our vision and our cones are very good at detecting colors and color contrast, our peripheral vision is better suited to black-and-white contrast and motion. Because of this, we decided to make our robot avatar’s outline black and white: since it sits in the periphery, it can be passively observed. However, other research shows that amber and red are better for getting attention in an emergency, so those colors will be part of any alert system or intervention. To achieve a good, calming balance of mood and color, we made the outline of the avatar black, for the contrast stated earlier, and the body blue.

User Psychology (Desires, Constraints, Fears)

Learning to drive is a very stressful experience for everyone involved, but especially for the students. Our human-robot interface was designed to be trust-earning, supportive, and empowering from the very beginning. We had to take into consideration not only a student’s mental misgivings about a robotic teacher, but also their goals and their foundation for future learning. We would need to first focus on fear and trust-building. Any rational person is going to need to be assured that their teacher is competent, and this will not be the same as mom or dad teaching their teenager to drive. The student will have grown up with their parents driving them around, and their parents will hopefully have earned their trust. For our robot to earn trust, we decided the best opportunity would be for driver’s education, normally taught online, to be taught in the car. This allows the robot to be perceived as a completely competent driver from the very beginning, and not just an emergency system that might intervene clumsily. It also allows the robot to show rather than tell when teaching, as students learn best when more channels of input and explanation are utilized. Rather than pointing at a whiteboard or a picture on a screen in a classroom, the robot and its monitor will be able to physically demonstrate correct driving and use the screen to display hypotheticals and accident scenarios. If the student and their robot instructor encounter someone else’s poor driving, complete with honking and blaming of the innocent student-robot team, something that could terrify and induce crippling stress, the robot can use the experience as a teachable moment: it can show what the other driver was doing wrong and help the student understand why emotional driving and road rage are dangerous.

During the next phase we needed to consider the psychology of learning and stress. By this point, students would have had time to learn to trust the instructor and would be practicing everything they had learned. This bombards the student with stress, embarrassment, and fear, as well as testing their determination and abilities. We would need to design a system that achieved a healthy balance between stress and learning opportunities. Rather than attempting to predict what that perfect balance is, we elected to use machine learning to customize every student’s robot to achieve it. Sensors will monitor pupil dilation, heart rate, and other stress indicators, allowing dynamic learning and customization of the system. The system will be able to test different attitudes and language for intervening and correcting mistakes, and compare the amount of stress induced with the promptness of the student’s response. Over a long enough time, the system will also be able to compare the language used with how well the student remembered the lesson, and weigh the stress induced against the importance of the lesson or mistake made.
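To make this concrete, here is a minimal sketch, under our own assumptions, of how the robot might pick a feedback style by weighing the stress a correction induces against how promptly the student responds. The style names, stress scale, and weights are illustrative, and a simple epsilon-greedy update stands in for whatever learning method the final system would actually use.

```python
# Illustrative sketch only, not the team's actual implementation.
import random

FEEDBACK_STYLES = ["gentle_hint", "direct_instruction", "firm_warning"]

class FeedbackSelector:
    """Keeps a running score per feedback style and favors the best one,
    while still occasionally exploring alternatives (epsilon-greedy)."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.scores = {style: 0.0 for style in FEEDBACK_STYLES}
        self.counts = {style: 0 for style in FEEDBACK_STYLES}

    def choose(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(FEEDBACK_STYLES)      # explore
        return max(self.scores, key=self.scores.get)   # exploit

    def update(self, style: str, stress_level: float, correction_time_s: float):
        # Reward fast corrections, penalize high stress (weights are guesses).
        reward = -0.5 * stress_level - 0.5 * min(correction_time_s / 10.0, 1.0)
        self.counts[style] += 1
        n = self.counts[style]
        self.scores[style] += (reward - self.scores[style]) / n  # running mean

# Example: a firm warning got a correction in 3 seconds, but pupil dilation
# and heart rate indicated a stress level of 0.8 on a 0-1 scale.
selector = FeedbackSelector()
selector.update("firm_warning", stress_level=0.8, correction_time_s=3.0)
print(selector.choose())
```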

As students begin to remember lessons and learn from their mistakes, the system will give them encouragement and praise. While research shows that automated and “canned” praise is less meaningful than praise from a real person, our system will be able to praise students with the normal “good job”s, but also in a more meaningful way by recording and displaying student improvement. We believe that the statement “you relied on the autopilot 50% less today, and perfectly navigated that rotary” will mean a lot to our students. This will build confidence and lead to the student getting more comfortable and needing the robot less.
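As a small illustration of the progress-based praise described above (our own sketch, not a finished feature), session logs could be reduced to an autopilot-reliance figure and turned into a concrete message:

```python
# Illustrative sketch; session tuples are (interventions, total_maneuvers).
def autopilot_reliance(interventions: int, total_maneuvers: int) -> float:
    """Fraction of maneuvers in which the autopilot had to intervene."""
    return interventions / total_maneuvers if total_maneuvers else 0.0

def praise_message(prev_session, current_session) -> str:
    prev = autopilot_reliance(*prev_session)
    curr = autopilot_reliance(*current_session)
    if prev > 0 and curr < prev:
        drop = round(100 * (prev - curr) / prev)
        return f"You relied on the autopilot {drop}% less today. Great progress!"
    return "Nice work today. Let's keep practicing!"

# Example: 6 interventions over 30 maneuvers last time, 3 over 30 today -> 50% less.
print(praise_message((6, 30), (3, 30)))
```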

The ultimate goal is complete autonomy for a student who does not need a robot looking over their shoulder all the time. While most social robots are designed to encourage more interaction and invite use, ours will start out prominent and eventually fade out as the student gets more proficient. The robot will still be able to jump into presence and control at any dangerous moment, but the goal of a driver’s license, at least according to the law right now, is to be able to safely drive a vehicle that has no automated systems, and students need to be prepared for that. Weaning the student off of Driver’s EDward will ensure that they are not overdependent on automated systems, an issue that Dr. Divya Chandra mentioned. She expressed the dangers of pilots using autopilot for every flight, how dependent pilots become, and how quickly their skills atrophy. Our goal was to make a teacher, not an autopilot, and we need to be constantly working toward that goal. By going through these three stages of learning, we believe that we will help drivers be safer, more prepared, and ready to drive on their own. We believe that Driver’s EDward is the future of driving!

Task Analyses

Link to full page:
https://drive.google.com/file/d/1Fgz5FijNz6EymXm4QH4eyymIDm_OF9UP/view?usp=sharing

Solutions

Sensors

Our system utilizes multiple sensors in order to guide the driver. Cameras on both the front and back of the vehicle give Driver’s EDward the visibility needed to provide instruction. Radar is used to detect other cars and pedestrians, as well as stationary objects like parked cars and buildings. This is necessary in order to add safety measures when educating new drivers, as well as to direct them in each task. Lidar provides precise distance measurements. In conjunction with radar, this permits Driver’s EDward to make educated decisions about the driver’s environment: what is in it and how far objects are from the car.
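As a rough illustration of how the radar and lidar readings for the same object might be combined, here is a minimal inverse-variance fusion sketch; the noise figures are our own assumptions, not sensor specifications.

```python
# Illustrative sketch of weighting two range measurements by sensor noise.
def fuse_distance(radar_m: float, lidar_m: float,
                  radar_var: float = 0.25, lidar_var: float = 0.01) -> float:
    """Inverse-variance weighted average of two range measurements (meters)."""
    w_radar = 1.0 / radar_var
    w_lidar = 1.0 / lidar_var
    return (w_radar * radar_m + w_lidar * lidar_m) / (w_radar + w_lidar)

# Example: radar says the parked car ahead is 12.4 m away, lidar says 12.1 m.
# The fused estimate leans toward the (less noisy) lidar reading.
print(round(fuse_distance(12.4, 12.1), 2))  # ≈ 12.11
```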

Processors

Our social robot comes in a small box, plugs into the car’s USB port, and appears as an avatar on the heads-up display, within the driver’s central vision. Our robot has the following key features:

  1. Sensors: cameras, radar, and lidar that inform the robot’s decision making
  2. Artificial Intelligence: two-way communication, auditory controls, and informed advice based on learning
  3. Emotive Display: the HUD provides a digital avatar, resembling a face, that reacts to situations and interactions with the driver

Emotive Displays

Car Set-Up

Audio Message: The system is loading. You will hear instructions for the pre-driving check shortly. Please do not turn on the car until you are instructed to do so.

Welcome Screen

Audio Message: Hi Jane, welcome to your driving lesson. Now that you’ve completed your pre-driving check, we can get started.

Switching Lanes

Audio Message: Now we’re going to practice switching lanes. The steps for this include 1) turning on your signal, 2) checking your mirrors, 3) checking your blind spot by looking over your shoulder, 4) changing lanes if it is safe, and 5) turning off your signal after completing the lane change. Are you ready to start?

Parallel Parking

Audio Message: Now we’re going to practice parallel parking. Signal and come to a complete stop next to the highlighted car.

Audio Message: Signal and turn your wheel all the way to the left.
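The prompts above could be organized as an ordered lesson script. Below is a minimal sketch of how the system might step through them; in reality, EDward would advance based on sensor feedback rather than keyboard input, and the prompt text simply mirrors the audio messages above.

```python
# Illustrative sketch only: scripted prompts keyed by lesson name.
LESSON_SCRIPT = {
    "car_setup": [
        "The system is loading. You will hear instructions for the "
        "pre-driving check shortly. Please do not turn on the car until "
        "you are instructed to do so.",
    ],
    "switching_lanes": [
        "Now we're going to practice switching lanes.",
        "Turn on your signal.",
        "Check your mirrors.",
        "Check your blind spot by looking over your shoulder.",
        "Change lanes if it is safe.",
        "Turn off your signal after completing the lane change.",
    ],
    "parallel_parking": [
        "Signal and come to a complete stop next to the highlighted car.",
        "Signal and turn your wheel all the way to the left.",
    ],
}

def run_lesson(name: str) -> None:
    """Speak each prompt in order, waiting for the student to confirm."""
    for prompt in LESSON_SCRIPT[name]:
        print(f"[EDward] {prompt}")
        input("Press Enter when ready for the next step...")

# Example: run_lesson("switching_lanes")
```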

User Walkthrough

Link to full page view: https://drive.google.com/file/d/1U3zRfYPAMMaGftW6gus2xrCfgta5YRaA/view?usp=sharing

Future Directions and Limitations

A significant limitation for Driver’s EDward is that it requires people to fully adopt the idea that self-driving vehicles are safer than human-operated vehicles. Moreover, it requires further development of self-driving cars. Currently, the state of “self-driving” cars is advanced assisted driving, and a person with only a learner’s permit would not be legally allowed to operate a “self-driving” car by themselves. Another limitation of Driver’s EDward is that it is built with the expectation that in the future, all cars will have a digital display and be compatible with our software. 

Moving forward, Driver’s EDward should be able to replace driving instructors completely. This process will feel strange at first but will overall improve safety. Driver’s EDward will allow for a standardized method of teaching driving and can be updated frequently as the technology in vehicles changes.

Blog Post #7 – The Future of GPS

I remember, as a little kid, when we downloaded Google Earth on the desktop in our computer room. My siblings and I were completely enamored by this technology. We looked up our address, our friends’ addresses, and spent the afternoon zooming in on different places around the world.

Ten years later, GPS is everywhere. Through apps on my phone, I always know my close friends’ whereabouts, how close I am to the nearest Uber, and the exact location of my keys and laptop. In my photos app, I can see a map of my 10,000 photos and where they were taken. The days of paper maps and printed directions are long gone.

This technology is only getting better, with the launch of the newest satellites, GPS III, expected in 2023. These satellites are projected to be three times more accurate than our current ones. While this is exciting, it also makes me question how this technology can, and inevitably will, be abused. Are we eliminating privacy altogether? Who will get access to our locations, and how can we limit access if needed? What do we lose if we lose anonymity?

Privacy and technology present an interesting tradeoff: as we get more technologically advanced, we seem to sacrifice some protections of our privacy, whether in terms of data on our tendencies or, now with GPS, our literal whereabouts. I don’t know how we should handle all this, but I’m both fascinated and unsettled by its potential.

Blog Post #6 – Social Robots and Emotive Displays

Social Robots

Last year, the American Psychological Association published an interesting cover story on the future of robots in our world as social beings and the psychology behind this technology. The article proposes the inevitability of these robots’ existence in the near future, as well as the need for humans to see these robots as “someones” rather than “somethings”.

Their applications are diverse: social robot prototypes are beginning to show up in customer service, education, and even places as fundamentally human as companionship and therapy assistant roles. In many ways these robots provide benefits to our way of life; we can program them with human tendencies to provide human interaction. However, this same benefit could be problematic. If we replace people with robots, are we limiting or removing real human interaction altogether?

I think that as designers we have to be aware of this fine line when making social robots. Ideally these devices should benefit humans without any drawbacks. Of course that is a romanticized goal, but I do think we can take considerable measures to ensure the safety of these robots for human use. We should test often, consider the user experience, and set guidelines and regulations for how these robots should be implemented. Their existence is all but inevitable; making sure they’re safe should be too.

Portfolio Assignment #3 – Augmented Reality Shopping Assistant

Belen Farias, Jonah Loeb, Fallon Shaughnessy

Illustration by Martin Laksman for MONEY: http://money.com/money/5024470/the-store-of-the-future/

Shopping can be great. It can also be miserable. Amazon has made its fortune on recognizing the latter. Online shopping has increasingly taken over traditional mall browsing, as users enjoy the seamlessness of the online consumer experience from the comfort of their homes. However, online shopping comes with its drawbacks. Primarily, clothes are only shown through digital pictures: shoppers lose the ability to try on prospective items and feel them in real time. Our future human will have the customizability of the online experience while in stores, through the use of our personal shopping assistive device.

The future shopping assistant is an augmented reality device designed to enhance the in-person retail experience in malls by transferring the powerful recommendation pipelines used in online shopping to the real world. Taking the form of contact lenses, the future shopping assistant recommends products and stores by highlighting them in their surroundings. Our device can then provide pricing, reviews, and source information for any product the user picks up.

Personae

Hayden is a middle schooler who is passionate about keeping up with popular culture. Hayden and her friends often browse Instagram and VSCO to see the latest fashion trends. She enjoys following these popular styles and likes to know where she can find the newest, most popular clothing in the stores where she shops. Like her peers, Hayden looks to conform her style to what others her age like. She worries what her friends think of her, and would like to know if a shirt is cool and trendy before she buys it.

John is a recent college graduate who has landed his first job at a bank in San Francisco. With a newfound income, John is in the market to find new work attire that can complement the clothes his mom bought him for graduation. John considers himself a shopping novice, so he desires a shopping experience that would allow him to compare brands easily. John prefers to try on his clothes and feel them before buying. However, he works long hours and is often tired after work, so time is of the essence.

Veronica is the director of an art gallery in New York City. She is well established in the art community, with over thirty years’ experience in the industry. Veronica cares a lot about her aesthetic; she describes her fashion sense as “professional with a flair”. She takes pride in the uniqueness of her wardrobe and spends hours handpicking pieces that cater to her taste. The last thing she wants is a cookie-cutter wardrobe, as she sees her attire as a reflection of her gallery and brand.

Our Solution

The recommendation pipeline takes a holistic approach to determining the best products. It factors in the user’s shopping behavior, personality, and style, along with current fashion trends, reviews, and sales (for a more detailed breakdown, see the machine learning input/output diagram). The assistant also takes into account the social aspect of in-person shopping: when shopping in a group, it will recommend stores that have something for everyone and can recommend products that go together, should the group want to coordinate.

Click here to view diagram on full screen
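As a hedged sketch of the kind of scoring such a pipeline might perform (the factor names and weights below are our own illustrative choices, not the diagrammed pipeline), consider:

```python
# Illustrative sketch of ranking products for the lenses to highlight.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    style_match: float   # 0-1: similarity to the user's style profile
    trend_score: float   # 0-1: how strongly the item is trending
    review_score: float  # 0-1: normalized average peer review
    on_sale: bool

WEIGHTS = {"style": 0.5, "trend": 0.2, "review": 0.2, "sale": 0.1}

def score(product: Product) -> float:
    """Weighted combination of personal fit, trends, reviews, and sales."""
    return (WEIGHTS["style"] * product.style_match
            + WEIGHTS["trend"] * product.trend_score
            + WEIGHTS["review"] * product.review_score
            + WEIGHTS["sale"] * (1.0 if product.on_sale else 0.0))

def recommend(products: list[Product], top_n: int = 3) -> list[Product]:
    """Return the highest-scoring items to highlight in the user's lenses."""
    return sorted(products, key=score, reverse=True)[:top_n]

# Example
items = [Product("linen blazer", 0.9, 0.4, 0.8, False),
         Product("graphic tee", 0.3, 0.9, 0.7, True)]
for p in recommend(items, top_n=2):
    print(p.name, round(score(p), 2))
```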

If the user would like a demonstration of the product to see, for example, how an article of clothing fits, the future shopping assistant can produce a virtual model of the user wearing the clothes. The user can dress the model in clothes from their closet back home or from the store to compare, and can share the model with others in the store and online.

Click here to view diagram on full screen

Interactions

MODES

  • Mode 1: For users who need guidance on how to shop at specific stores and want recommended products.
  • Mode 2: For users who want help finding a specific item in a store.
  • Mode 3: For users who want to style themselves on the go; they can access their closet and products without having to be physically near them, and can create outfits and store them for later.

ALERTS

Users will have the option to choose what information they are alerted about. The built-in notifications include:

  • Alerts users when an item that wasn’t available when they wanted it is back in stock.
  • Notifies users of new products that fit their criteria (style/budget).
  • Alerts users when stores have sales (to promote product use, the company could team up with stores to offer discounts to users).
  • Recommends products based on people whose style the user is interested in.
  • Alerts users when new outfit combinations and style choices are available, as a form of stylistic advice.
  • Reminds users to update their choices (they can swipe through recommended styles and brands to improve the algorithm).

INPUTS

  • Take pictures of the clothes/items you own in order to train the device; mark your favorite clothing items.
  • Fill out a survey on the colors, textures, patterns, and combinations you are interested in.
  • Let the product know what season you are shopping for and whether it is for a specific event.
  • Record body measurements and ideal fit.
  • Set an ideal price tag and an overall budget for monthly clothing spending.
  • Note brands and items you are specifically interested in.
  • Import photos/styles you prefer.
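The inputs above could be captured in a simple profile structure that the assistant consults when making recommendations; the field names below are illustrative assumptions rather than a finalized schema.

```python
# Illustrative sketch of a user profile built from the inputs listed above.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    owned_items: list[str] = field(default_factory=list)       # photos of owned clothes
    favorite_items: list[str] = field(default_factory=list)
    preferred_colors: list[str] = field(default_factory=list)  # from the style survey
    preferred_patterns: list[str] = field(default_factory=list)
    season: str = "any"                                         # season or event shopped for
    body_measurements: dict[str, float] = field(default_factory=dict)
    monthly_budget: float = 0.0
    preferred_brands: list[str] = field(default_factory=list)
    inspiration_photos: list[str] = field(default_factory=list)

# Example
jane = UserProfile(preferred_colors=["navy", "olive"],
                   season="fall", monthly_budget=150.0,
                   preferred_brands=["Zara"])
print(jane.season, jane.monthly_budget)
```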

User Walkthrough

The user would first buy the contact lenses.

They would then have to input all of their data (see inputs above).

The user then selects the mode they are interested in.

For Mode 1:

  • User is currently at a shopping center and needs help selecting different styles 
  • The device would give recommendations based on their prior inputs 
  • The user is given a list of possible stores and directions on how to get there 
  • e.g., the user walks into a store like “Zara”
    • The lens would highlight clothes that you would potentially be interested in. You are able to see the star rating and reviews for the product.
    • “Style Me” button shows the possible combination of the product with existing items in your closet.  You are able to swipe through the combinations.
    • The user is asked if they are satisfied with the recommendation, which would therefore improve machine learning.
    • The lens also displays images of others and how they’ve used the item. 
    • It is able to mix and match the item with others in the store and create potential outfits. 
  • Once the product is bought, the user confirms on the lens so it can be added to the list of existing items

For Mode 2: 

  • User needs help finding an item near them.
  • The lens would ask the user to input what they are searching for and use their location data to display the item near them. 
  • A map would then be displayed with directions on how to get to the item 
  • Once the item is bought, it would be added to the list of items. 

For Mode 3: 

  • The user would click on the “my closet” tab 
    • This feature has an image of the user where their clothes can be overlaid on their body
  • Allows the user to mix and match different combinations of outfits 
  • Can save these combinations for future use

How We Get from Here to There

The technology we would incorporate in our shopping assistant is already emerging. Augmented-reality smart glasses have already been produced by companies like Apple, Toshiba, and Epson. Many products currently on the market boast GPS, motion sensors, and video and picture capabilities. The major drawbacks are battery life and reliability. However, these early prototypes show promise for how augmented reality can transform a space.

Our product is inspired by the machine learning models currently used in online shopping. In particular, we look to build on the ability to filter clothing selections based on the person, as well as to provide the social component of peer reviews and trends. Amazon has begun to explore technology in the fashion domain, particularly with the creation of the Echo Look. The Echo Look provides style suggestions by analyzing what you’re wearing. Combining a high-quality camera with machine learning algorithms that look at fit, color, and style, the Echo Look acts as a fashion advisor that responds within a minute.

We are beginning to see emerging technology that could eventually contribute to the product we propose for the future human. With the combination of machine learning and the high-functioning sensors and imaging used in augmented reality, the shopping assistant provides customizable experiences to consumers in a future that may not be as far off as we think. We project this reality could be here in the next fifty years.

Ethics and Society

  • Not everyone may be comfortable using wearable devices; users would not have the ability to stop the display of targeted ads. 
  • People may not feel comfortable with the idea of having their style chosen for them: Some shoppers may prefer an entirely hands-on shopping experience without assistance.
  • Accessibility: only those who can already afford this item would benefit from it, creating a large socio-economic gap.
  • Depending on the way the AI is trained, it may not recommend appropriate styles for people of color since there are not a lot of models of color online 
    • Could therefore be biased in the types of clothes it recommends
    • Might not be great for people that don’t fit the normal beauty standards
  • May not be ethical to increase consumption so much: what does this mean for fast fashion?

Future Directions and Limitations

The personal shopping assistant we created utilizes augmented reality to transform the inside of shopping spaces. Because augmented reality only enhances an existing environment, we are limited by how the physical space is structured. For example, store layouts can differ drastically in terms of organization: some stores have wide open spaces, while others are cluttered; some are busy and loud, while others are empty and quiet. The user’s experience can vary based on these variables, which is a drawback for a system that boasts customizability.

Therefore, a potential next step would be to create an entirely virtual environment that mimics the real store completely. This would permit users to have their own settings independent of other shoppers or environmental influences. This would be the ultimate blending of the online shopping experience into a physical environment, and would be the union of the in store and digital consumer experiences.

Blog Post #5 – The Evolution of Customer Service

[Illustration: robots wearing headphones, rendered in 3D]

I recently bought a product that needed to be returned. When I went on the company’s website, I was directed to a chat conversation. At first, I was completely convinced the representative I was conversing with was human. The conversation was dynamic, and the employee responded casually and appropriately to my needs. However, the representative later asked me if I needed to be transferred to a human representative, which was the first moment in the entire interaction that I realized I was messaging with a robot.

We expect robots to be found in the grandest of inventions: cars, planes, even weapons. When we think of robots we often think of high-tech gadgets. However, the revolution has begun in the day-to-day interactions we often overlook. Machine learning has permitted us to teach robots how to respond dynamically to our needs and to carry conversations based on patterns. I think it’s fascinating how companies have utilized machine learning to create help centers directed by these automated systems. Alexas and Google Homes are now common entities in our households. While I look forward to where machine learning goes in enhancing our big technological advancements, I’m also fascinated by how our world is slowly coming to revolve around these less flashy ones.

Blog Post #4 – The Future Human & Sport

When we speak of the future human, we often talk about how we can enhance our capabilities. If only we had super vision, hyper speed, or quicker minds, we could become the super humans, or superheroes, we have all grown up admiring. Something I often think about is, if we do come to the point where technology is extending the human capability to superman-like powers, how will this inherently change our society?

The Olympics have long been the global event that celebrates the capability of the human body at the highest level. Athletes in their respective fields are admired, and those at the very top reach international fame. The use of performance-enhancing drugs is prohibited, and those caught doping lose their medals and their popularity. Steroids are tested for in every sport. In this case, extending human capability is a cheat against the natural body.

However, as science progresses in our quest for this future human, we risk blurring the lines between what’s natural and what’s not. What will we embed in our bodies? How will we alter our make-up, and how will we determine whether an athlete is clean or not?

These concerns are already here with the introduction of gene doping. Gene doping permits athletes to increase muscle mass and strength significantly through gene alteration. Originally developed for people with injuries or disabilities, gene doping has already surfaced illegally in competition. The Olympic Committee has invested in creating testing techniques to catch gene dopers and speaks vehemently against its use.

I wonder how long we can regulate sport, as we already have plenty of cases of cheaters that slip through the cracks. What will the future Olympian look like? Maybe we throw in the towel in terms of testing and call it a free for all — use whatever you’d like to win, super human vs. super human. May the best man-machine-technology win.

Blog Post #3 – Signal Detection Theory and Explosive Sniffing Dogs

Signal detection theory describes not only how accurately we pick up on a stimulus, but also how well we can do so given the circumstances. Each situation is unique, both in the severity of misses and false alarms and in our capability of actually picking up the appropriate signal.
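To make this concrete, here is a small worked example (not from the original post) of the standard sensitivity and bias measures signal detection theory provides; the hit and false-alarm rates are made up.

```python
# Sensitivity (d') and response bias (criterion c) from hit and false-alarm
# rates, using the inverse normal (z) transform.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: separation between signal and noise distributions."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def criterion(hit_rate: float, false_alarm_rate: float) -> float:
    """Response bias: negative values mean a liberal (alarm-prone) observer."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

# Example: a detector (dog or machine) that flags 90% of real explosives
# but also alarms on 20% of harmless bags.
print(d_prime(0.90, 0.20))   # ≈ 2.12
print(criterion(0.90, 0.20)) # ≈ -0.22
```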

Humans aren’t perfect at reading our environments. We’re far from it. So when thinking of how automation can extend the human capability to read signals, I thought of how we have turned to alternatives other than machines to enhance our detection of stimuli. In particular, my mind went to our use of explosive-sniffing dogs.

Dogs make the perfect candidate for such a task. They are loyal and trainable, and have the hypersensitive sense of smell that we lack. We can, and have, tailored the training of dogs to contribute both domestically and internationally to our protection in the most stressful of situations.

So do we need automation in this domain when we have dogs? It’s certainly worth investigating. In the last couple of weeks, multiple news outlets have reported the neglect and mistreatment of these dogs overseas in Jordanian kennels. Shocking images reveal these highly trained and skilled dogs dying of starvation and heat stroke, and living in shockingly destitute conditions. The neglect is widespread, and the U.S. continues to ship more of these dogs overseas.

While dogs may seem like a strong alternative to humans for sniffing out explosives, I argue automation should be our next frontier in this domain. If we aren’t willing to take care of our dogs, we have no business using them. Can we mimic hunting instincts and smelling abilities in machines? I hope so; it’s at the very least worth exploring.

Blog Post #2 – Cultural Differences & Task Analysis

In undergrad, I was able to take a course on childhood development across cultures. What I learned from the course and the literature we read was how cultures can be simultaneously so vastly different and so alike in doing the same things. A large takeaway was that, while we tend to think of how we personally perform tasks as the best way, this is often not the case. Cultures may differ in how they perform similar actions, as simple as driving or eating, or as complex as how we operate our governments and educational systems. However, they all serve the general purpose of enhancing our quality of life.

Grossman et al.’s studies explored cognitive differences between eastern and western cultures and the various factors that could lead to these differences, such as genetics and linguistics as well as societal structure. I think it would be interesting to explore task analyses of everyday routines, like driving and eating, across cultures. Maybe we could learn ways to make the methodologies so deeply embedded in our own society more dynamic and efficient.