
Blog 10: Reflection

Thoughts on the Class

I most enjoyed the IoT lectures because I expect this technology to grow very rapidly in the coming years.

I would have liked to go into more detail in all of the topics covered; however, that wouldn’t be possible without adjusting the time spent on other interesting topics. If I had to spend less time on one subject area, it would be the GPS/GIS section. I found GPS and GIS very interesting; however, the subject area is so vast that, due to time constraints, I did not learn a whole lot about it.

I also think that the big final project should be restructured. Due to the nature of the Gigglebots, it was not possible to have four people work on the coding element of the project, so in most teams only one person was responsible for the main part of the project: programming the Gigglebots to complete challenges 1 and 2.

What topics do you think should be added to future years?  

Aside from trimming the GPS/GIS unit, I don’t think any topics should be added or removed. The course covers so many relevant topics already that it would be more interesting to go into further depth on the existing ones. Lastly, I think an assignment that researches current market offerings of the technology we learn about in class would be very interesting. For example, I liked the blog articles because I usually wrote about real-world applications of the technologies we covered in class. My favorite blog post was about Amazon Go, a high-tech, cashier-less store. In conclusion, I really enjoyed ENP 162 and feel like I learned a whole lot.

Final Project

Tyler Hassenpflug, Tim Holt, Alec Portelli, and Blake Williams

Mental Mode & API

The basic architecture of the code for the project includes 5 modules: 3 for controllers and 2 for Gigglebot interpretation. There is a module for the master swarm controller, a module for individual control in a swarm setting, a module for control in an individual control setting, a module for Gigglebot interpretation of swarm-setting commands, and a module for Gigglebot interpretation of individual-setting commands. The model we used for switching controlling parties in the individual setting was communication based: once both bots are in the appropriate location, the two parties gain each other’s attention and initiate a switch of control using the micro:bit’s buttons. In the swarm setting, there is no need for communication to switch control because both the master and the individual have complete control over the bot; all that is necessary is a mutual understanding between the master controller and the individuals that they will both initiate actions that are mutually beneficial.

Task Analysis

To view the task analysis, please download the file below.

Software Discussion

The basic architecture of the code includes 5 modules: pass-setting controller, swarm-setting master controller, swarm-setting individual controller, Gigglebot swarm setting, and Gigglebot pass setting. In designing for the pass setting, we utilized the built-in Gigglebot controller module for controlling the bots. While this module’s high sensitivity initially made the bots difficult to control, once the controller micro:bit was housed in our ergonomic controller it became significantly easier to handle. For passing control, our pod designated buttons on the micro:bits that change the bot’s controller group using a radio signal. We initially tried to have each controller initiate a particular change of control for both bots in the pod, but ran into problems with both bots picking up the signal from a single controller. We then switched to having each controller hand its bot over to the other group’s controller, which required slightly more communication.
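
As an illustration of the pass mechanism, here is a minimal MicroPython sketch for the controller micro:bit. It is a simplified stand-in for our actual modules (which used the built-in Gigglebot controller library); the radio group numbers, tilt thresholds, and command strings are illustrative assumptions, not our exact values.

```python
# Simplified MicroPython sketch of a pass-setting controller (micro:bit).
# Group numbers 1 and 2 stand in for the two controller groups in a pod.
from microbit import accelerometer, button_a, sleep
import radio

MY_GROUP, OTHER_GROUP = 1, 2   # radio groups for the two controllers
bot_group = MY_GROUP           # which bot this controller currently drives

radio.on()
radio.config(group=bot_group)

while True:
    if button_a.was_pressed():  # hand control over to the other group's bot
        bot_group = OTHER_GROUP if bot_group == MY_GROUP else MY_GROUP
        radio.config(group=bot_group)
    # map controller tilt (milli-g readings) to a simple drive command
    x, y = accelerometer.get_x(), accelerometer.get_y()
    if y < -300:
        radio.send("forward")
    elif y > 300:
        radio.send("backward")
    elif x < -300:
        radio.send("left")
    elif x > 300:
        radio.send("right")
    else:
        radio.send("stop")
    sleep(100)
```

On the bot side, the matching module would simply listen on its current radio group and translate the received strings into motor commands.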

For the swarm setting, we used only radio signals to give commands to the bots, so that all bots could receive a signal from a single controller (the master). In this setting we essentially mapped each direction the bot could move to a tilt of the controller micro:bit in that direction. To solve the problem of multiple controllers commanding the same Gigglebot, we designated a 10-value range through which each controller gives commands: the master controller sends values from 20-29, and the individuals use the 30s and 40s. Shared control between the master and individuals relies on communication and trust between the two parties. If an individual notices an error in their bot’s path, they can send the commands that fix it, overriding the master’s commands, and then stop sending, returning control to the master.
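
A small sketch of how that value-range scheme can be decoded on the bot side. The post above only fixes the ranges (20-29 for the master, 30s and 40s for individuals); the specific direction offsets here are an illustrative assumption.

```python
# Sketch of the swarm command encoding described above.
# Each controller owns a ten-value range; the low digit is the direction.
# Direction offsets (0=stop .. 4=right) are illustrative assumptions.
MASTER_BASE = 20              # master sends 20-29
INDIVIDUAL_BASES = (30, 40)   # individual controllers send 30-39 and 40-49
DIRECTIONS = {0: "stop", 1: "forward", 2: "backward", 3: "left", 4: "right"}

def decode(value):
    """Split a received radio value into (sender, direction)."""
    base, offset = value - value % 10, value % 10
    sender = "master" if base == MASTER_BASE else "individual"
    return sender, DIRECTIONS.get(offset, "stop")

# e.g. the master tilting forward sends 21; an individual override sends 31
print(decode(21))  # ('master', 'forward')
print(decode(31))  # ('individual', 'forward')
```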

Pass setting controller

Pass setting Gigglebot 

Swarm setting Gigglebot

Swarm setting individual controller

The code for the swarm master controller is unavailable, as it lives on the other pod team’s machine.

UI/UX Controller



User Walk Through

Meet Lucy

Lucy is a 21-year-old student at Tufts University studying Economics. Lucy knows very little about technology beyond being able to sync her Apple Watch, iPhone, and MacBook. During finals week, she accompanies her friend Tim to the Nolop lab on the first floor of Tufts’ new Science, Engineering and Technology building. The Nolop lab is a new makerspace open to everyone on the Tufts campus. Lucy has been listening to Tim ramble on for the past two weeks about his project and how he has been programming micro:bit chips for something called a GiggleBot. She wanted to see what all the fuss was about, so she decided to join him in Nolop while he finishes up his final project.

When Lucy first walks into the Nolop lab, she notices several little green robots, controllers, and rooms with grids spread out on the floor. She picks up the controller next to Tim’s GiggleBot and turns on both the GiggleBot and the controller. She tilts the controller forward, instructing the GiggleBot to rotate its wheels in a forward orientation relative to the bot, and continues to tilt the controller forward until she drives directly into the wall. She then tips the controller backwards, instructing the bot to rotate its wheels in a backwards orientation, and continues to tilt the controller backwards until it rams directly into Tim’s ankle. She then repeats this process by tilting the controller to the right and then to the left.

Tim then picks up the controller for another bot in the same pod and turns on both the bot and the controller. Tim looks at Lucy and indicates that they are going to switch control of the bots. Both Tim and Lucy press the A button to gain control of each other’s bots. They each look at their new bots and determine whether the bot needs to move forward, backward, to the left, or to the right.

Two other students from Tim’s Human Machine System Design course pick up controllers for the other bots in the same pod. They each turn on their respective controllers and bots and begin driving them around the room. Now that all bots are turned on, the master controller, in this case Lucy, decides she wants to gain control of all the bots. Lucy observes the location of all the bots and which direction she would like them to move. All at once, Lucy takes control over all of the bots in the pod, moving them around the room as a group.

 Video Overview

Reflections and Future Directions

This assignment brought forth some of the human factors challenges that come with swarm system design. The coordination needed between the coders during the design process was vital, and getting everyone on their respective teams to understand the system proved key as well. Creating a universal system that is easy for everyone to use, and that can allow more than one person to enter the path of control, proved to be a very difficult task. There are some arbitrary concepts when it comes to control: for example, “50% control” is arbitrary when we do not necessarily know what constitutes 50%. Designing the system forced us to define a lot of these unknowns. This project also taught us that human factors design isn’t necessarily always tactile. When figuring out exactly how the system was going to work, everything was hypothetical until it was coded and tested. Visualizing a system is hard to do, and this project tested our ability to take a task analysis and transform it into an execution, rather than an interface or product. It definitely helped with the psychological side of the human factors process.

In terms of designing the controller, using anthropometrics was the main challenge in finding a design that could fit the intended population. It would have been very easy to put the chip in a simple shape, but instead we took the time to define an intended audience, in this case the 5th-95th percentile of the male population, take the measurements, and create an adaptive ergonomic design that all members of our team could easily use. This was a great opportunity to take a real-life case, design for someone we could see using the product, and make changes accordingly. Overall, this project tested both sides of human factors: physical design and system design that puts the user first.

Future directions for a project like this would be to increase the number of robots being controlled and the number of users, and to allow the amount of control a user or master user has to vary. Creating the flexibility for a user to determine how much control they have over the swarm is a very complicated task, and it again comes down to defining what 50% or 90% control means. Especially as swarms grow large, passing control gets very complicated, so designing a system that seamlessly integrates what each user wants, simultaneously, would be critical to making it successful. Redesigning the interface would be very important as well. Instead of using two buttons to receive and give control, some sort of interface could let the user highlight the part of the swarm he or she would like to control, so that there isn’t any confusion or mix-up when passing control; having only two buttons may create mistakes about who is receiving control. In terms of the controller, we would create a more ergonomic design with exterior materials that are more comfortable to hold than sheets of plastic, along with buttons that are easier to press. A new kind of interface, depending on the system, would help as well, such as a touchscreen with an ergonomic handle. Such a controller would make it easy to see which user is controlling what, making it simple to pass control within a big swarm.

Blog 9: Automation & Chatbots in Financial Services

Businesses are increasingly using chatbots to serve customers seeking support. Chatbots use AI to help answer users’ questions. Businesses employ chatbots because they lower the cost of customer service representatives and give users instant feedback. Many financial firms have begun to adopt chatbots, which I believe is an exciting application that raises many interesting questions about users’ perceptions and sense of privacy.

Most banks view bots as an opportunity (1).

  • Most firms are investing in bots (1).
  • Many banks believe that bots have the potential to take over conversations usually handled by customer service employees (1).
  • Bots provide instant responses (1).
  • Many banks need help building bot technology (1).

78% of retail bank customers seek guidance with their banking; however, only 45% of customers felt that the digital experience they received met their needs (2). Chatbots represent a mechanism to help improve customers’ digital experience and close gaps in consumer knowledge.

All of the major banks have begun to release their own chatbots. For example, Bank of America released Erica in order “to send notifications to customers, provide balance information, suggest how to save money, provide credit report updates, pay bills and help customers with simple transactions” (3).

Bank of America’s chatbot, Erica. (4)
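
Erica’s internals are not public, but the simplest mechanism behind this kind of bot is intent matching: map the user’s message to one of a fixed set of supported tasks, and hand off to a human when nothing matches. A toy Python sketch, with entirely hypothetical intents and keywords mirroring the capabilities quoted above:

```python
# Toy sketch of keyword-based intent matching, the simplest mechanism a
# banking chatbot could use; real bots use trained language models.
INTENTS = {
    "balance": ["balance", "how much money"],
    "pay_bill": ["pay", "bill"],
    "credit_report": ["credit report", "credit score"],
}

def classify(message):
    """Return the first intent whose keyword appears in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent
    return "handoff_to_human"  # fall back to a customer service rep

print(classify("What's my checking balance?"))  # balance
print(classify("I want to dispute a charge"))   # handoff_to_human
```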

The rise of automation allows technology to perform tasks that businesses would otherwise hire employees to do. According to PwC, by the early 2030s around 38% of jobs in the United States could be automated (5). Interestingly, the likelihood of one’s job being automated depends heavily on the level of education the job requires. For example, in the UK an estimated 46% of jobs that require only a high school degree will be automated, compared with just 12% of jobs that require an undergraduate degree.

The US has the highest share of jobs at risk of automation (5).

These statistics point to the fact that the financial services industry will continue to develop technology, like chatbots, that automate tasks and, ideally, also improve the customer’s experience. Currently, chatbots can function to direct consumers to educational resources and perform simple tasks; however, the chatbots in the financial industry do not have the ability to perform more complex tasks.

Sources

  1. https://thefinancialbrand.com/63596/financial-banking-bots-chatbot-voice-ai/
  2. https://www.jdpower.com/business/press-releases/2018-us-retail-banking-advice-study
  3. https://thefinancialbrand.com/71251/chatbots-banking-trends-ai-cx/
  4. https://zdnet2.cbsistatic.com/hub/i/r/2018/05/18/cc061047-1988-404c-9a15-802d318c0c2a/thumbnail/570×322/f76d8352246ef4b5c16fa2c333618e12/5aff292560b27e5ce5351f3e-1280x7201may182018201323poster.jpg
  5. https://www.pwc.co.uk/economic-services/ukeo/pwcukeo-section-4-automation-march-2017-v2.pdf

Blog 8: IoT

The “Internet of Things” (IoT) refers to the ability of connected devices to transmit data between one another. Effectively, this allows devices to communicate with one another and automate tasks. For example, say your digital calendar notices that you are running late to lunch with a friend, based on your car’s location; your phone could then use this information from the calendar and the car to send a text message telling your friend that you’re running late. IoT is expected to grow at an increasingly high rate in the upcoming years: an annual rate of 28.7% between 2020 and 2025 (1).
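
That “running late” scenario is really just an if-then rule wired across devices. A hypothetical Python sketch, where get_next_event, get_car_eta_minutes, and send_text are placeholder functions standing in for calendar, car-location, and messaging APIs:

```python
# Hypothetical sketch of the "running late" IoT automation described above.
# The three callables are placeholders for real device/service APIs.

def running_late_rule(get_next_event, get_car_eta_minutes, send_text):
    # e.g. {"title": ..., "starts_in": 10, "location": ..., "contact": ...}
    event = get_next_event()
    eta = get_car_eta_minutes(event["location"])
    if eta > event["starts_in"]:      # the car can't arrive before it starts
        delay = eta - event["starts_in"]
        send_text(event["contact"],
                  f"Running about {delay} min late to {event['title']}, sorry!")
```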

Future IoT applications in a “smart” city (2)

IoT applications in the home have been popular among technology companies, such as Amazon. Here is a summary of current smart home devices on the market and their uses:

Echo Show with Philips Hue Light Bulb

The Echo Show can play music, TV shows, and movies, and make video calls. The device can also use IoT to turn on specific lights at the user’s request.

Amazon Smart Plug

The smart plug works with Amazon’s Alexa to turn on or off any device that plugs into an outlet. The user can switch an appliance, light, or other device on or off by voice, and can also schedule devices or control them remotely.

Amazon Alexa

Alexa acts as the “brain” of the smart home and is able to connect to all of the devices described above.

Going forward, IoT’s capabilities will be expanded by future innovations in big data and machine learning. There have also been security concerns with IoT; for example, some worry that Alexa “has been eavesdropping on users’ conversations.” As technology companies continue to develop IoT devices, they will have to build in capabilities that let users indicate their privacy preferences. Personally, I’m very interested in where IoT will go and how the smart home will continue to evolve.

Sources:

  1. https://thriveglobal.com/stories/the-future-of-iot-4-predictions-about-the-internet-of-things/#:~:targetText=In%20a%20span%20of%20ten,devices%20on%20a%20single%20cell.
  2. https://www.forbes.com/sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-understand/#7e2c082b1d09
  3. Images obtained from Amazon.com

Blog 7: GPS and GIS

This week in ENP 162 Human-Machine System Design, we worked with the data lab at Tufts to learn more about GPS and GIS. GPS, the Global Positioning System, was originally developed by the US military; however, it has had profound impacts on technology industries. GIS, a geographic information system, is a way to visualize, interpret, and analyze data provided by GPS.

An example of a GIS system (1).

There are around 30 satellites constantly orbiting the Earth at an altitude of around 20,000 km (2). From any point on Earth, at least 4 satellites are visible at any time. Satellites, receivers, and ground stations all work together to express a location in longitude and latitude coordinates (3).
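
The core of the calculation is simple: each satellite broadcasts when it sent its signal, and the receiver converts the travel time into a distance. A small Python sketch with made-up travel times:

```python
# Sketch of the core GPS calculation: converting each satellite's signal
# travel time into a range. The travel times here are made up.
C = 299_792_458  # speed of light, m/s

travel_times = {"sat_1": 0.0712, "sat_2": 0.0694,
                "sat_3": 0.0731, "sat_4": 0.0705}  # seconds

ranges = {sat: C * dt for sat, dt in travel_times.items()}
for sat, r in ranges.items():
    print(f"{sat}: {r / 1000:,.0f} km away")

# With four ranges (and the satellites' known positions), the receiver can
# solve for latitude, longitude, altitude, and its own clock error.
```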

An overview of how GPS works (4).

GPS and GIS have a myriad of applications. Here is a list of everyday applications where you may use GPS and GIS:

  • Google Maps
  • Uber Eats
  • Facebook “check-in”
  • Yelp
  • Snapchat “filters”
  • Google search engine: “find bike stores near me”

GPS and GIS also have a lot of implications in academia and research. Here are some examples of important questions researchers examine with this data:

  • Migratory patterns of endangered animals
  • Displacement of refugees
  • Spread of antibiotic-resistant diseases
  • Oil spills
  • Spread of invasive species

GIS and GPS have a dynamic future with serious potential consequences. With the further development of big data and the increased ability to track and locate individuals, this type of technology could raise serious ethical and societal issues in the wrong hands. Although this technology has the potential to cause harm, it also has an incredible ability to help humans, the environment, and animals. If epidemiologists were able to identify the spread of harmful diseases faster, many human lives could be saved. Or, if researchers were better able to identify the migratory patterns of endangered species, mechanisms could be put in place to limit humans’ impact on those species.

GIS of health-related issues in the USA (5).

In conclusion, like most other technological aspects in this course, GPS and GIS capabilities are going to continue to improve, and we must consider how these technologies will impact society–both positively and negatively.

Sources

  1. https://upload.wikimedia.org/wikipedia/en/1/15/IDRISI_GIS_Seasonal_Trends.jpg
  2. http://www.physics.org/article-questions.asp?id=55#:~:text=The%20Global%20Positioning%20System%20(GPS,an%20altitude%20of%2020%2C000%20km.&text=These%20signals%2C%20travelling%20at%20the,for%20the%20messages%20to%20arrive.
  3. https://spaceplace.nasa.gov/gps/en/
  4. https://www.slideshare.net/bahamut2/how-gps-works-4693448
  5. https://upload.wikimedia.org/wikipedia/commons/b/b6/UpdatedHeartDiseaseMap.jpg

Blog 6: Social Robots

A social robot can be defined as “an artificial intelligence (AI) system that is designed to interact with humans and other robots” (1).

Social robots have the potential to completely disrupt the customer experience across all industries. For example, retail stores may not even have a need in the future to staff their stores with people; instead, social robots and other forms of technology could be responsible for tasks like checking customers out, staffing customer service departments, and stocking shelves.

In 2018, Amazon opened their first cashier-less store named “Amazon Go” (3). Customers scan their Amazon app to enter the store and then are free to shop around. Customers only need to pick items off the shelf and then are free to leave the store with their items. The technology in the store is able to pick up what items customers grab and then electronically bills customers through their Amazon profile.

Overview of Amazon Go (2).

Although the stores are highly automated, there are still employees in the Amazon Go stores. Employees are needed in the store in order to stock shelves and assist customers. In the future, I believe that Amazon will look to use more advanced automated processes and social robots in order to eventually create a store that is completely employee-less.

In addition, Amazon currently employs a policy that is not sustainable at wide scale. If customers are incorrectly charged or not satisfied with their purchase, they can get a refund without any further questions. Hypothetically, this system would allow customers to take advantage of it very easily. Social robots are one potential solution to ensure that customers are not able to game the system.

Amazon Go store in Seattle, Washington (3).

There are many ill-defined tasks that these social robots will have to be able to perform. Customer service representatives often interact with customers who have highly unique problems. Because of this, it would not be possible for a programmer to instruct a robot on how to solve every possible problem a customer may have. This is one of the current limitations of social robots compared to humans: they are unable to handle more complex, situational responsibilities.

Customers scan their phone with the Amazon app open in order to enter the store (3).

Although they may have some limitations, social robots offer many potential benefits to companies. For example, social robots allow companies to spend less money staffing their workforce and give them better access to customer issues. Hypothetically, social robots would be able to keep data on the types of interactions they have with customers. This data could be reported to companies to give key insights about customer pain points, allowing companies to solve common customer issues faster.

Sources:

  1. https://searchenterpriseai.techtarget.com/definition/social-robot
  2. https://www.youtube.com/watch?time_continue=40&v=NrmMk1Myrxc
  3. https://www.nydailynews.com/news/national/amazon-cashier-less-grocery-store-finally-open-article-1.3771675

Blog 5: Machine Learning

Machine learning refers to computer programs that have the ability to learn. Computers are able to do this through analytical and statistical components written into the program. In supervised machine learning, a program is given a set of labeled training examples and is then able to draw conclusions about new data. In unsupervised machine learning, a program is not given labeled training examples and must find structure in the data on its own.
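
A minimal scikit-learn sketch contrasting the two paradigms on the same dataset (the classic iris data; the particular model choices are arbitrary):

```python
# Supervised vs. unsupervised learning on the same data, in scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model sees labeled training examples (X paired with y)...
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))   # ...and predicts labels for new data

# Unsupervised: the model sees only X and must find structure on its own.
km = KMeans(n_clusters=3, n_init=10).fit(X)
print(km.labels_[:3])       # cluster assignments, learned without labels
```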

The following is an overview of applications of machine learning. 

1. Facial recognition and imaging software are being developed in order to help physicians in identifying diseases like cancer. 

2. Machine learning can detect fraud better than humans because models are able to process much larger amounts of information. Machine learning systems learn users’ purchasing habits and detect anomalies that may be fraud (a toy sketch of this idea follows this list).

3. Recommendation engines are ubiquitous in consumer products. For example, Netflix recommends new shows to users, UberEats recommends new restaurants to try, and Amazon suggests products that its users may be interested in.

4. Self-driving cars use machine learning to improve the safety of their driving.

5. Social media analyzes a given user’s activity to generate content for them. For example, Instagram has a discover page that’s created based upon a user’s prior activity. Facebook analyzes users’ prior activity in order to generate advertisements that are targeted toward specific users.
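
As promised in item 2, here is a toy sketch of the anomaly-detection idea behind fraud flagging. Real systems use far richer features and models; the purchase history and the three-standard-deviation rule below are purely illustrative.

```python
# Toy fraud flagging: model a user's purchase amounts and flag large
# deviations from their habits. The history and threshold are made up.
import statistics

purchases = [12.50, 8.99, 23.10, 15.75, 9.40, 480.00]  # last item is new

mean = statistics.mean(purchases[:-1])    # habits learned from past purchases
stdev = statistics.stdev(purchases[:-1])

new_charge = purchases[-1]
z = (new_charge - mean) / stdev
if z > 3:                                 # more than 3 standard deviations off
    print(f"flag ${new_charge:.2f} for review (z = {z:.1f})")
```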

Looking forward, machine learning’s capabilities are only going to expand. Companies will be able to segment their markets more finely, which will result in advertisements and products geared toward more specific groups, and thus a more personalized user experience. Machine learning will also be able to use big data to better predict outcomes, with applications in forecasting stock prices, the weather, or political outcomes.

Sources 

https://www.toptal.com/machine-learning/machine-learning-theory-an-introductory-primer

https://builtin.com/artificial-intelligence/machine-learning-examples-applications

Blog 4: Neural Networks

Neural networks are integral to the development of artificial intelligence and machine learning. Neural networks in machines loosely resemble how humans process information: in humans, billions of neurons send electrical impulses and communicate at synapses.


Neural networks are modeled on human neurons. In computer science, neural networks interpret data by categorizing it. Neural networks are composed of nodes, which are instances where processing occurs, similar to the role of synapses in humans. A node combines its inputs with a weight assigned to each connection in order to interpret a given input. The weighted inputs are then summed and passed into the system’s activation function, which determines if, and to what extent, the signal should influence the final output (similar to how a human neuron either fires in response to a stimulus or does not).
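
A single node’s computation, sketched in Python with NumPy. The input values and weights are made up, and sigmoid is just one common choice of activation function:

```python
# Sketch of a single node: a weighted sum of inputs passed through an
# activation function, which gates how strongly the signal propagates.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.9, -0.3])    # sensory input to the node
weights = np.array([0.8, -0.2, 0.4])   # learned connection weights
bias = 0.1

activation = sigmoid(np.dot(weights, inputs) + bias)
print(activation)  # between 0 and 1: how strongly this node "fires"
```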

Neural networks are able to perform more advanced functions as the number of node layers increases. Typically, these networks are organized as a feature hierarchy, in which the level of abstraction increases as data moves deeper into the network. In facial recognition, for example, the network may begin with individual pixels in the first layer and end with whole human faces by the third layer.

Neural networks can also be applied to big data. This application can be particularly useful as scientists grapple with how to interpret the massive amounts of data now available to us. For example, neural networks could be applied to forecasting stock prices, predicting disease outbreaks, and identifying criminals through face detection. Although neural networks have very useful applications, there are broader implications we must consider. For example, if facial recognition misidentifies people of certain races as criminals at a higher rate, it would lead to greater discrimination; in fact, this is a common concern about facial recognition software being implemented.

Source:

https://skymind.ai/wiki/neural-network#targetText=Neural%20networks%20are%20a%20set,labeling%20or%20clustering%20raw%20input.

Blog 3: Signal Detection & Information Theory

Signal detection theory measures a user’s ability to differentiate a specific signal from noise. For example, an air traffic controller needs to be able to differentiate an airplane (the signal) from large clouds (one potential source of noise).

The differences between automation and human inclusion in system design. Source.

Automation has the potential to be incorporated into signal detection. If there is an environmental hazard, a given system has a mechanism, usually sensors, to pick up this hazard. The sensor output can then either feed directly into automation or alert the human so that they can act. For example, in an AC system, a thermostat senses a deviation from the set temperature and signals the system to raise or lower the temperature. If this did not happen via automation, the AC system would have to alert the user of the temperature change via its display and require the user to take action to activate the system, which would be very cumbersome.
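
A sketch of that closed loop in Python. Here read_temperature and set_hvac are hypothetical hardware interfaces, and the deadband keeps the system from rapidly toggling around the set point:

```python
# Sketch of the thermostat example: automation closes the loop so the user
# never has to act on the raw signal. The two callables are hypothetical.
import time

SET_POINT = 21.0   # degrees C
DEADBAND = 0.5     # hysteresis so the system doesn't rapidly toggle

def thermostat_loop(read_temperature, set_hvac):
    while True:
        temp = read_temperature()          # the sensor detects the "signal"
        if temp < SET_POINT - DEADBAND:
            set_hvac("heat")               # automation acts; no user alert
        elif temp > SET_POINT + DEADBAND:
            set_hvac("cool")
        else:
            set_hvac("off")
        time.sleep(60)
```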

False positives and false negatives pose danger when they occur. Source.

In addition, there are four different outcomes when distinguishing signal from noise: the user correctly identifies a signal that is present (a hit), correctly identifies that no signal is present (a correct rejection), identifies a signal that is not there (a false positive), or fails to identify a signal that is there (a false negative). Let’s think about this within the scope of the medical field. If a doctor were performing an ultrasound looking for free fluid in the abdomen, the doctor could:

– correctly identify free fluid in the abdomen (hit)

– correctly identify a lack of free fluid in the abdomen (correct rejection)

– identify free fluid in the abdomen that is not there (false positive)

– fail to identify free fluid in the abdomen (false negative)

With regards to which outcome is the most dangerous, the false negative has the potential to cause serious harm. This is because the false negative leads to the doctor missing a potential catastrophic diagnosis. 
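
These four outcomes can be tallied directly from paired (signal present, observer responded) trials. A small Python sketch with made-up trial data:

```python
# Tally the four signal detection outcomes from (signal, response) trials.
# The trial data below is made up for illustration.
trials = [(True, True), (True, False), (False, False),
          (False, True), (True, True), (False, False)]

counts = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
for present, responded in trials:
    if present and responded:
        counts["hit"] += 1
    elif present and not responded:
        counts["miss"] += 1                # the dangerous false negative
    elif not present and responded:
        counts["false_alarm"] += 1
    else:
        counts["correct_rejection"] += 1

print(counts)
hit_rate = counts["hit"] / (counts["hit"] + counts["miss"])
print(f"hit rate: {hit_rate:.2f}")
```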

Signal detection theory is an important aspect when designing systems because it ensures that systems are designed with the idea that humans are imperfect. There are a range of factors that can affect a user’s ability to detect a signal—like level of fatigue, physical abilities, and environment—so by designing systems that accurately alert users, we are able to increase the safety of these systems. 

Blog Post 2: Task Analysis

Task analysis is often used by human factors specialists in order to analyze how a user achieves an outcome. For example, in a task analysis of making a peanut butter and jelly sandwich, the human factors specialist would note the steps and tools used to make the sandwich. A task analysis is usually outlined as a sequence of steps.

In hierarchical task analysis (HTA), an overall task is broken down into steps and then into sub-steps. Task analysis is a great tool for determining which processes in a system can be automated. In the spaghetti example pictured below, a researcher might identify that “emptying pasta box into pot” could be automated (a small data-structure sketch of this idea follows the figure).

HTA of making spaghetti from ENP 162 lecture
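
One way to make an HTA machine-readable is a nested structure with an automation flag per step. The sub-steps below are illustrative, not the lecture’s exact HTA:

```python
# Sketch of a hierarchical task analysis as a nested structure; the
# spaghetti sub-steps and flags here are illustrative assumptions.
HTA = {
    "task": "make spaghetti",
    "steps": [
        {"task": "boil water", "steps": [], "automatable": True},
        {"task": "empty pasta box into pot", "steps": [], "automatable": True},
        {"task": "taste for doneness", "steps": [], "automatable": False},
        {"task": "drain and serve", "steps": [], "automatable": False},
    ],
}

def list_automation_candidates(node, depth=0):
    """Walk the hierarchy and flag steps that could be automated."""
    marker = " (automation candidate)" if node.get("automatable") else ""
    print("  " * depth + node["task"] + marker)
    for sub in node["steps"]:
        list_automation_candidates(sub, depth + 1)

list_automation_candidates(HTA)
```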

Another type of task analysis is cognitive task analysis, in which researchers analyze the cognitive processes people use to complete tasks. There are over 100 methods of cognitive task analysis; however, most follow these principles:

“1) Collect preliminary knowledge 

2) Identify knowledge representations 

3) Apply focused knowledge elicitation methods

4) Analyze and verify data

5) Format results for intended application.” 

Cognitive task analysis is often used to define the knowledge necessary to complete a task. This can be highly important in reducing human error in professional settings; for example, a cognitive task analysis could be performed to map out the process of landing an aircraft.

Cognitive task analysis is a great way to get a detailed representation of how a task is completed. In addition, it can help identify automation opportunities by illustrating the level of cognitive complexity a step requires. Unfortunately, cognitive task analysis is a resource-intensive process, and it does not identify the non-cognitive requirements needed to complete a task.

An alternative way to present a task analysis. Source.

In conclusion, task analysis can be used in academia and industry to better understand users and sources of potential conflict. With regards to automation, task analysis clearly outlines the concrete steps in a process, which can then be used to identify steps that could be automated. In the end, task analysis is a highly beneficial framework that can reveal more about users and inform design decisions.

References:

https://www.usabilitybok.org/cognitive-task-analysis

http://www.cogtech.usc.edu/publications/clark_etal_cognitive_task_analysis_chapter.pdf


