Blog Post #7 – The Future of GPS

I remember, as a little kid, when we downloaded Google Earth onto the desktop in the computer room. My siblings and I were completely enamored with this technology. We looked up our address, our friends’ addresses, and spent the afternoon zooming in on different places around the world.

Ten years later, GPS is everywhere. Through apps on my phone, I always know my close friends’ whereabouts, how close I am to the nearest Uber, and the exact locations of my keys and laptop. In my photos app, I can see a map of my 10,000 photos and where each was taken. The days of paper maps and printed directions are long gone.

This technology is only getting better, with the launch of the newest satellites, GPS III, expected in 2023. These satellites are projected to be three times more accurate than our current ones. While this is exciting, it also makes me question how this technology can, and inevitably will, be abused. Are we eliminating privacy altogether? Who will get access to our locations, and how can we limit access if needed? What do we lose if we lose anonymity?

Privacy and technology present an interesting tradeoff: as we become more technologically advanced, we seem to sacrifice some of our privacy protections, whether that means data on our habits or, now with GPS, our literal whereabouts. I don’t know how we should handle all this, but I’m both fascinated and unsettled by its potential.

Blog Post #6 – Social Robots and Emotive Displays

Social Robots

Last year, the American Psychological Association published an interesting cover story on the future of robots in our world as social beings and the psychology behind this technology. The article proposes that these robots’ existence in the near future is inevitable, as well as the need for humans to see these robots as “someones rather than somethings”.

Their applications are diverse: social robot prototypes are beginning to show up in customer service, education, and even places as fundamentally human as companionship and therapy assistant roles. In many ways these robots provide benefits to our way of life; we can program them with human tendencies to provide human interaction. However, this same benefit could be problematic. If we replace people with robots, are we limiting or removing real human interaction altogether?

I think that as designers we have to be aware of this fine line when making social robots. Ideally these devices should benefit humans without any drawbacks. Of course this is a romanticized goal, but I do think we can take considerable measures to ensure the safety of these robots for human use. We should test often, consider the user experience, and set guidelines and regulations for how these robots should be implemented. Their existence is inevitable; making sure they’re safe should be too.

Portfolio Assignment #3 – Augmented Reality Shopping Assistant

Belen Farias, Jonah Loeb, Fallon Shaughnessy

Illustration by Martin Laksman for MONEY: http://money.com/money/5024470/the-store-of-the-future/

Shopping can be great. It can also be miserable. Amazon has made its fortune on recognizing the latter. Online shopping has increasingly taken over traditional mall browsing, as users enjoy the seamlessness of the online consumer experience from the comfort of their homes. However, online shopping comes with its drawbacks. Primarily, clothes are shown only as digital pictures; shoppers lose the ability to try items on and to feel prospective purchases in person. Our future human will have the customizability of the online experience while in stores, through the use of our personal shopping assistive device.

The future shopping assistant is an augmented reality device designed to enhance the in-person retail experience in malls by transferring the same powerful recommendation pipelines used in online shopping to the real world. Taking the form of contact lenses, the future shopping assistant recommends products and stores by highlighting them in the user’s surroundings. The device can then provide pricing, reviews, and source information for any product the user picks up.

Personae

Hayden is a middle schooler who is passionate about keeping up with popular culture. Hayden and her friends often browse Instagram and VSCO to see the latest fashion trends. She enjoys following these popular styles and likes to know where she can find the newest, most popular clothing in the stores where she shops. Like her peers, Hayden looks to conform her style to what others her age like. She worries about what her friends think of her, and would like to know whether a shirt is cool and trendy before she buys it.

John is a recent college graduate who has landed his first job at a bank in San Francisco. With a newfound income, John is in the market to find new work attire that can complement the clothes his mom bought him for graduation. John considers himself a shopping novice, so he desires a shopping experience that would allow him to compare brands easily. John prefers to try on his clothes and feel them before buying. However, he works long hours and is often tired after work, so time is of the essence.

Veronica is the director of an art gallery in New York City. She is well established in the art community, with over thirty years’ experience in the industry. Veronica cares a lot about her aesthetic; she describes her fashion sense as “professional with a flair”. She takes pride in the uniqueness of her wardrobe and spends hours handpicking pieces that cater to her taste. The last thing she wants is a cookie-cutter wardrobe, as she sees her attire as a reflection of her gallery and brand.

Our Solution

The recommendation pipeline takes a holistic approach to determine the best products. It factors in the user’s shopping behavior, personality, and style, along with current fashion trends, reviews, and sales (for a more detailed breakdown, see the machine learning input/output diagram). The assistant also takes into account the social aspect of in-person shopping. When shopping in a group, the assistant will recommend stores that have something for everyone and can recommend products that go together, should the group want to coordinate.
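As a rough illustration, here is one way such a holistic score might blend those signals. This is a minimal sketch assuming a simple weighted sum; all field names and weights are hypothetical illustrations, not a specification of our actual pipeline.

```python
# Hypothetical sketch of the holistic recommendation score described above.
# The weights and fields are illustrative assumptions, not the real pipeline.
from dataclasses import dataclass

@dataclass
class Product:
    style_tags: set          # e.g. {"minimal", "casual"}
    trend_score: float       # 0-1, from current fashion trends
    avg_review: float        # 0-5 star rating from peer reviews
    on_sale: bool

def recommendation_score(product: Product, user_styles: set) -> float:
    """Blend the user's style match with trends, reviews, and sales."""
    style_match = len(product.style_tags & user_styles) / max(len(product.style_tags), 1)
    score = (0.5 * style_match                   # personal style and behavior
             + 0.3 * product.trend_score        # what's trending right now
             + 0.2 * product.avg_review / 5.0)  # peer reviews, normalized
    return score + (0.05 if product.on_sale else 0.0)  # small boost for sales

# Example: a trendy, well-reviewed shirt for a user who likes minimal styles
shirt = Product({"minimal", "casual"}, trend_score=0.8, avg_review=4.2, on_sale=True)
print(round(recommendation_score(shirt, {"minimal", "streetwear"}), 3))  # 0.708
```

A real pipeline would learn these weights from behavior rather than hard-coding them.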


If the user would like a demonstration of the product to see, for example, how an article of clothing fits, the future shopping assistant can produce a virtual model of the user wearing the clothes. The user can dress the model in clothes from their closet back home or from the store to compare, and can share the model with others in the store and online.


Interactions

MODES

  • Mode 1: For users who need guidance on how to shop at specific stores and want recommended products.
  • Mode 2: For users who want help finding a specific item in a store.
  • Mode 3: For users who want to style themselves on the go; they can access their closet and products without having to be physically near them, and can create outfits and store them for later. (The three modes are sketched as an enum below.)
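For concreteness, the three modes could be represented as a simple enum; the names below are our own shorthand, not fixed product terminology.

```python
# Hypothetical shorthand for the three operating modes described above.
from enum import Enum, auto

class AssistantMode(Enum):
    GUIDED_SHOPPING = auto()   # Mode 1: store guidance and recommended products
    ITEM_FINDER = auto()       # Mode 2: locate a specific item in a store
    STYLE_ON_THE_GO = auto()   # Mode 3: remote closet access and outfit building
```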

ALERTS

Users will have the option to choose what information they are alerted about. The built-in notifications include the following (a configuration sketch follows the list):

  • Alerts you when an item that wasn’t available when you wanted it is back in stock.
  • Notifies you of new products that fit your criteria (style/budget).
  • Alerts you when stores have sales (to promote product use, we could team up with companies to offer discounts to users).
  • Recommends products based on people whose style you’re interested in.
  • Alerts you when new outfit combinations and style choices are available, as a form of stylistic advice.
  • Reminds you to update your choices (you can swipe through recommended styles and brands to improve the algorithm).
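Since every notification above is opt-in, the settings could be stored as a simple flags object. This is a minimal sketch; the field names are hypothetical.

```python
# Hypothetical opt-in flags, one per built-in notification listed above.
from dataclasses import dataclass

@dataclass
class AlertPreferences:
    back_in_stock: bool = True          # item available again after being out of stock
    new_matching_products: bool = True  # new products fitting style/budget criteria
    store_sales: bool = True            # sales and partner discounts
    style_icon_picks: bool = True       # picks from people whose style you follow
    outfit_suggestions: bool = True     # new outfit combinations and style choices
    preference_refresh: bool = True     # reminders to re-rate styles and brands

# Example: a user who only wants restock and sale alerts
prefs = AlertPreferences(new_matching_products=False, style_icon_picks=False,
                         outfit_suggestions=False, preference_refresh=False)
```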

INPUTS

  • Take pictures of the clothes/items you own to train the device, and mark your favorite items.
  • Fill out a survey on the colors, textures, patterns, and combinations you are interested in.
  • Tell the product what season you are shopping for, and whether it is for a specific event.
  • Record your body measurements and ideal fit.
  • Set an ideal price tag per item and an overall budget for monthly clothing spending.
  • Note brands and items you are specifically interested in.
  • Import photos/styles you prefer. (These inputs are sketched as a profile structure below.)
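Taken together, these inputs amount to an onboarding profile. The sketch below shows one plausible shape for it; all field names and types are assumptions for illustration.

```python
# Hypothetical onboarding profile populated by the inputs listed above.
from dataclasses import dataclass, field

@dataclass
class ShopperProfile:
    owned_items: list = field(default_factory=list)        # photos of owned clothes
    favorites: list = field(default_factory=list)          # marked favorite items
    style_survey: dict = field(default_factory=dict)       # colors, textures, patterns
    season_and_event: str = ""                             # what you're shopping for
    measurements: dict = field(default_factory=dict)       # body measurements, ideal fit
    max_item_price: float = 0.0                            # ideal price tag per item
    monthly_budget: float = 0.0                            # monthly clothing spending
    preferred_brands: list = field(default_factory=list)   # brands of specific interest
    inspiration_photos: list = field(default_factory=list) # imported styles you prefer
```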

User Walkthrough

The user would first buy the contact lenses.

They would then have to input all of their data (see inputs above).

The user then selects the mode they are interested in.

For Mode 1:

  • The user is currently at a shopping center and needs help selecting different styles.
  • The device gives recommendations based on their prior inputs.
  • The user is given a list of possible stores and directions on how to get there.
  • e.g., the user walks into a store like Zara:
    • The lens highlights clothes the user would potentially be interested in, along with each product’s star rating and reviews.
    • A “Style Me” button shows possible combinations of the product with existing items in the user’s closet; the user can swipe through the combinations.
    • The user is asked whether they are satisfied with the recommendation, and the answer feeds back into the machine learning model.
    • The lens also displays images of others and how they’ve used the item.
    • It can mix and match the item with others in the store to create potential outfits.
  • Once the product is bought, the user confirms on the lens to add it to the list of existing items. (This loop is sketched in code below.)
For Mode 2: 

  • The user needs help finding an item near them.
  • The lens asks the user to input what they are searching for and uses their location data to find the item nearby.
  • A map is then displayed with directions on how to get to the item.
  • Once the item is bought, it is added to the list of items. (A lookup sketch follows this list.)
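Here is a minimal sketch of the Mode 2 lookup, assuming flat x/y coordinates and an in-memory inventory; the real device would use actual location data and live store feeds.

```python
# Hypothetical Mode 2 lookup: find stores stocking the item, nearest first.
import math

def find_item_nearby(query: str, user_xy: tuple, inventory: list) -> list:
    """Return names of stores carrying `query`, sorted by straight-line distance."""
    matches = [s for s in inventory if query in s["items"]]
    matches.sort(key=lambda s: math.dist(user_xy, s["xy"]))  # nearest first
    return [s["name"] for s in matches]

stores = [{"name": "Zara", "xy": (0.0, 2.0), "items": {"black blazer"}},
          {"name": "Uniqlo", "xy": (1.0, 0.5), "items": {"black blazer", "socks"}}]
print(find_item_nearby("black blazer", (0.0, 0.0), stores))  # ['Uniqlo', 'Zara']
```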

For Mode 3: 

  • The user clicks on the “my closet” tab.
    • This feature shows an image of the user onto which their clothes can be overlaid.
  • The user can mix and match different combinations of outfits.
  • These combinations can be saved for future use.

How We Get from Here to There

The technology we would incorporate in our shopping assistant is already emerging. Augmented reality hardware, and specifically smart glasses, has already been produced by companies like Toshiba and Epson, with Apple reportedly developing its own. Many products currently on the market boast GPS, motion sensors, and video and picture capabilities. Major drawbacks come in battery life and reliability. However, these early prototypes are promising demonstrations of how augmented reality can transform a space.

Our product is inspired by the machine learning behind current online shopping models. In particular, we look to build on the ability to filter clothing selections based on the person, as well as to provide the social component of peer reviews and trends. Amazon has begun to explore technology in the fashion domain, particularly with the Echo Look. The Echo Look provides style suggestions by analyzing what you’re wearing. Combining a high-quality camera with machine learning algorithms that look at fit, color, and style, the Echo Look acts as a fashion advisor that responds within about a minute.

We are beginning to see emerging technology that could eventually contribute to the product we propose for the future human. By combining machine learning with the high-functioning sensors and imaging used in augmented reality, the shopping assistant provides customizable experiences to consumers in a future that may not be as far off as we think. We project this reality could be here within the next fifty years.

Ethics and Society

  • Not everyone may be comfortable using wearable devices: users would not have the ability to stop the display of targeted ads.
  • People may not feel comfortable having their style chosen for them: some shoppers may prefer an entirely hands-on shopping experience without assistance.
  • Accessibility: those who buy this item would likely be those who can afford it, creating a big gap along socioeconomic lines.
  • Depending on how the AI is trained, it may not recommend appropriate styles for people of color, since there are not a lot of models of color online.
    • It could therefore be biased in the types of clothes it recommends.
    • It might not work well for people who don’t fit normative beauty standards.
  • It may not be ethical to increase consumption so much: what does this mean for fast fashion?

Future Directions and Limitations

The personal shopping assistant we created utilizes augmented reality to transform the inside of shopping spaces. Because augmented reality is, by nature, an enhancement of an existing environment, we are limited by how the physical space is structured. For example, store layouts can differ drastically in organization. Some stores may have wide open spaces, while others are cluttered. Some stores could be very busy and loud, while others are empty and quiet. The user’s experience can vary with the variables present in these settings, which is a drawback for a system that boasts customizability.

Therefore, a potential next step would be to create an entirely virtual environment that mimics the real store completely. This would permit users to have their own settings, independent of other shoppers or environmental influences. It would be the ultimate blending of the online shopping experience into a physical space, uniting the in-store and digital consumer experiences.

Blog Post #5 – The Evolution of Customer Service

Image: robots with headphones (3D render)

I recently bought a product that needed to be returned. When I went on the company’s website, I was directed to a chat conversation. At first, I was convinced that the representative I was conversing with was human. The conversation was dynamic, and the employee responded casually and appropriately to my needs. However, the representative later asked me if I needed to be transferred to a human representative, which was the first time in the entire interaction that I realized I was messaging with a robot.

We expect robots to be found in the grandest of inventions: cars, planes, even weapons. When we think of robots we often think of high-tech gadgets. However, the revolution has begun in the day-to-day interactions we often overlook. Machine learning has permitted us to teach robots how to dynamically respond to our needs, and to carry conversations based on patterns. I think it’s fascinating how companies have utilized machine learning to create help centers directed by these automated systems. Alexas and Google Homes are now common entities in our households. While I look forward to where machine learning goes in enhancing our big technological advancements, I’m also fascinated by how our worlds are slowly coming to revolve around these less flashy ones.

Blog Post #4 – The Future Human & Sport

When we speak of the future human, we often talk about how we can enhance our capabilities. If only we had super vision, hyper speed, or quicker minds, we could become the superhumans, or superheroes, we have all grown up admiring. Something I often think about is: if we do reach the point where technology extends human capability to Superman-like powers, how will this change our society?

The Olympics have long been the global event that celebrates the capability of the human body at the highest level. Athletes in their respective fields are admired, and those at the very top reach international fame. Use of performance-enhancing drugs is prohibited, and those caught doping lose their medals and their popularity. Steroids are tested for in every sport. In this case, extending human capability is a cheat against the natural body.

However, as science progresses in our quest for this future human, we risk blurring the line between what’s natural and what’s not. What will we embed in our bodies? How will we alter our make-up, and how will we determine whether an athlete is clean or not?

These concerns are already here with the introduction of gene doping. Gene doping permits athletes to increase muscle mass and strength significantly through gene alteration. Originally developed for people with injuries or disabilities, gene doping has already surfaced illegally in competition. The Olympic Committee has invested in creating testing techniques to catch gene dopers, and speaks vehemently against its use.

I wonder how long we can regulate sport, as we already have plenty of cases of cheaters slipping through the cracks. What will the future Olympian look like? Maybe we throw in the towel on testing and call it a free-for-all: use whatever you’d like to win, superhuman vs. superhuman. May the best man-machine-technology win.

Blog Post #3 – Signal Detection Theory and Explosive Sniffing Dogs

Signal detection theory describes not only how accurate we are at picking up on a stimulus, but also how that accuracy depends on individual circumstances. Each situation can be unique, both in the severity of misses and false alarms and in our underlying ability to pick up the appropriate signal.
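To make that concrete, the two standard quantities in signal detection theory are sensitivity (d′) and the response criterion (c), both computed from hit and false-alarm rates. The sketch below uses made-up example rates, not data from any study discussed here.

```python
# d' and criterion c from hit and false-alarm rates (illustrative values).
from scipy.stats import norm

def dprime_and_criterion(hit_rate: float, fa_rate: float):
    """Standard signal detection measures from hit/false-alarm rates."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)  # z-transforms of the rates
    d_prime = z_hit - z_fa             # separation of signal from noise
    criterion = -0.5 * (z_hit + z_fa)  # response bias: <0 leans "yes", >0 leans "no"
    return d_prime, criterion

# Example: a detector that catches 90% of real signals with 10% false alarms
d, c = dprime_and_criterion(0.90, 0.10)
print(f"d' = {d:.2f}, c = {c:.2f}")  # d' ≈ 2.56, c = 0.00
```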

Humans aren’t perfect at reading our environments. We’re far from it. So when thinking of how automation can extend the human capability to read signals, I thought of how we have turned to alternatives outside of machines to enhance our detection of stimuli. Particularly, my mind went to our use of explosive-sniffing dogs.

Dogs make the perfect candidate for such a task. They are loyal and trainable, and they have the smelling hypersensitivity that we lack. We can, and have, tailored the training of dogs to contribute both domestically and internationally to our protection in the most stressful of situations.

So do we need automation in this domain when we have dogs? It’s certainly worth investigating. In the last couple of weeks, multiple news outlets have reported the neglect and mistreatment of these dogs overseas in Jordanian kennels. Shocking images reveal these highly trained and skilled dogs dying of starvation and heat stroke, and living in shockingly destitute conditions. Neglect is abundant, and the U.S. continues to ship more of these dogs overseas.

While dogs may seem like a strong alternative to humans for sniffing out explosives, I argue automation should be our next frontier in this domain. If we aren’t willing to take care of our dogs, we have no business using them. Can we mimic hunting instincts and smelling abilities in machines? I hope so; it’s at the very least worth exploring.

Blog Post #2 – Cultural Differences & Task Analysis

In undergrad, I was able to take a course on childhood development across cultures. What I learned from the course and the literature we read was how cultures can be simultaneously vastly different and remarkably alike in doing the same things. A large takeaway was that, while we tend to think the way we personally perform tasks is best, this is often not the case. Cultures may differ in how they perform similar actions, as simple as driving or eating, or as complex as how we operate our governments and educational systems. However, these all serve the general purpose of enhancing our quality of life.

Studies by Grossman et al. explored cognitive differences between Eastern and Western cultures and the various factors that could lead to these differences, such as genetics and linguistics as well as societal structure. I think it would be interesting to explore task analyses of everyday routines, like driving and eating, across cultures. Maybe we could learn ways to make the methodologies so deeply embedded in our own society more dynamic and efficient.

Blog Post #1 – Human-Machine Systems and Automation

Hello and welcome to the ENP162 website!

Hello! Thank you for visiting my portfolio site for ENP162. My name is Fallon Shaughnessy, and I am a human factors engineering master’s student at Tufts University. The purpose of my blog is to share my thoughts and insights on all things human factors and automation. And I would love to hear from you! Don’t hesitate to leave comments and replies via the space provided at the end of each blog post. My hope is that this page becomes a place for lively discussion!

Human-Machine Systems and Automation

We live in an increasingly technology-driven world, which is clearly highlighted by the evolution of automated systems and how we interact with them. As automated technology advances rapidly, I become increasingly curious about the ethics surrounding these systems. Are they dangerous? Could they do more harm than good? What constitutes harm?

I always think about a childhood favorite book of mine, Charlie and the Chocolate Factory. Poor Mr. Bucket was replaced at the toothpaste factory because of these very machines. Because of his unemployment, Charlie and his family faced financial trouble and a general worsening of well-being, both physically and psychologically. Daniel Akst, writing for The Wilson Quarterly, described the risks and responsibilities we take on when implementing automation into industry. When we think of harm, we often think of physical injury or death. We have all heard how self-driving cars can, and have, caused physically harmful accidents. But what other types of harm are we subjecting humans to through automation? And how can we be cognizant of these risks? Who benefits from automated systems, and who doesn’t?