Blog 10: Reflection

Thoughts on the Class

I most enjoyed the IoT lectures because I expect this technology to grow very rapidly in the coming years. 

I would have liked to go into more detail in all of the topics covered; however, that wouldn’t be possible without adjusting the time spent on other interesting topics. I think that if I had to spend less time on one subject area, it would have been the GPS/GIS section. I found GPS and GIS very interesting; however, it is such a vast subject area that, due to time constraints, I did not learn a whole lot about it. 

I also think that the big final project should be restructured. Due to the nature of the Gigglebots, it was not possible to have four people work on the coding element of the project, so in most teams only one person was responsible for the main part of the project: programming the Gigglebots to complete challenges 1 and 2.  

What topics do you think should be added to future years?  

Besides removing the GPS/GIS section, I don’t think that any topics should be added or removed. The course already covers so many relevant topics that it would be more interesting to go into further depth with the existing ones.  Lastly, I think that an assignment researching current market offerings of the technology we learn about in class would be very interesting. For example, I liked the blog articles because I usually wrote about the real-world applications of the technologies that we covered in class. My favorite blog post was about Amazon Go, a high-tech, cashier-less store. In conclusion, I really enjoyed ENP 162 and feel like I learned a whole lot.

Final Project

Tyler Hassenpflug, Tim Holt, Alec Portelli, and Blake Williams

Mental Model & API

The basic architecture of the code for the project includes 5 different modules: 3 for controllers and 2 for Gigglebot interpretation. The code includes a module for a master swarm controller, a module for individual control in a swarm setting, a module for control in an individual control setting, a module for Gigglebot interpretation of swarm-setting commands, and a module for Gigglebot interpretation of individual-setting commands. The model we used for switching controlling parties in the individual control setting was communication based. Once both bots are in the appropriate location, the separate parties gain each other’s attention and initiate a switch of control using the micro:bit’s buttons. In the swarm setting, there is no need for communication to switch control because both the master and the individual have complete control over the bot; all that is necessary is a mutual understanding between the master controller and the individuals that they will both initiate actions that are mutually beneficial.

Task Analysis

To view the task analysis, please download the file below.

Software Discussion

The basic architecture of the code includes 5 modules: a pass-setting controller, a swarm-setting master controller, a swarm-setting individual controller, a Gigglebot swarm-setting module, and a Gigglebot pass-setting module. In designing for the pass setting, we utilized the built-in Gigglebot controller module for controlling the bots. While this module’s rather high sensitivity made the bots difficult to control initially, once the controller micro:bit was housed in our ergonomic controller, it became significantly easier to control. For passing control, our pod designated buttons on the micro:bits for changing the controlling group of the bot using a radio signal. We initially tried to have each controller initiate a particular change of control for both bots in the pod, but ran into problems with both bots picking up the signal from a single controller. We then switched to having each controller hand its own bot over to the other group’s controller, which required slightly more communication.
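The pass-of-control idea above can be sketched in plain Python (this is a simulation of the logic, not actual micro:bit code; the group numbers and class names are made up for illustration). Each bot only accepts commands from its current radio group, and pressing the pass button retunes the bot to the other team’s group:

```python
# Hypothetical sketch of the pass-setting handoff logic.
# Plain Python, not GiggleBot/micro:bit code; group numbers are invented.

TEAM_A_GROUP = 1
TEAM_B_GROUP = 2

class Gigglebot:
    def __init__(self, listen_group):
        self.listen_group = listen_group
        self.last_command = None

    def on_radio(self, group, command):
        # Ignore traffic from controllers on other radio groups.
        if group == self.listen_group:
            self.last_command = command

class Controller:
    def __init__(self, own_bot, other_group):
        self.own_bot = own_bot
        self.other_group = other_group

    def press_pass_button(self):
        # Hand our own bot over to the other team's controller.
        self.own_bot.listen_group = self.other_group

bot_a = Gigglebot(TEAM_A_GROUP)
bot_b = Gigglebot(TEAM_B_GROUP)
ctrl_a = Controller(bot_a, TEAM_B_GROUP)
ctrl_b = Controller(bot_b, TEAM_A_GROUP)

# Both teams press the pass button, swapping control of the bots.
ctrl_a.press_pass_button()
ctrl_b.press_pass_button()

bot_a.on_radio(TEAM_B_GROUP, "forward")  # team B now drives bot A
bot_b.on_radio(TEAM_A_GROUP, "left")     # team A now drives bot B
print(bot_a.last_command, bot_b.last_command)  # forward left
```

Note how this mirrors the extra communication the handoff required: both parties have to press their button, since each controller can only give away its own bot.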

For the swarm setting, we only used radio signals to give commands to the bots so that all bots could receive signals from a single controller (the master). In this setting we essentially mapped each direction that the bot could move to a tilt of the controller micro:bit in that direction. To solve the problem of having multiple controllers control the same Gigglebot, we designated a 10-value range for each controller to give commands through. The master controller gives commands from 20-29, and the individuals use the 30s and 40s. Shared control between the master and individuals relies on communication and trust between the two parties. If an individual notices an error in their bot’s path, they can send commands to correct it, overriding the master’s commands, and then stop sending commands, returning control to the master.
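The numeric command scheme can be sketched as follows (a plain-Python illustration, not our actual module; the direction codes and the "individual overrides master" rule are assumptions layered on the band ranges described above):

```python
# Hypothetical sketch of the banded radio-value scheme: each controller
# owns a 10-value band, and the low digit encodes the direction.

DIRECTIONS = {0: "stop", 1: "forward", 2: "backward", 3: "left", 4: "right"}

BANDS = {20: "master", 30: "individual_1", 40: "individual_2"}

def decode(value):
    """Split a radio value into (sender, direction)."""
    band = (value // 10) * 10
    sender = BANDS.get(band)
    direction = DIRECTIONS.get(value % 10)
    return sender, direction

def choose_command(master_cmd, individual_cmd):
    # Shared control: an individual's command, when present,
    # overrides the master's; otherwise the master drives.
    return individual_cmd if individual_cmd is not None else master_cmd

print(decode(21))                         # ('master', 'forward')
print(decode(33))                         # ('individual_1', 'left')
print(choose_command("forward", None))    # forward
print(choose_command("forward", "left"))  # left
```

Because every value carries its sender in the tens digit, all bots can listen on one channel and still tell the master and each individual apart.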

Pass setting controller

Pass setting Gigglebot 

Swarm setting Gigglebot

Swarm setting individual controller

The code for the swarm master controller is unavailable, as it is on the machine of the other team in the pod.

UI/UX Controller

User Walk Through

Meet Lucy

Lucy is a 21-year-old student at Tufts University studying Economics.  Lucy knows very little about technology other than being able to sync her Apple Watch, iPhone, and MacBook.  During finals week, she accompanies her friend Tim to the Nolop lab on the first floor of Tufts’ new Science, Engineering and Technology building.  The Nolop lab is a new makerspace open to everyone on the Tufts campus. Lucy has been listening to Tim ramble on for the past two weeks about his project and how he has been programming micro:bit chips for something called a GiggleBot.  She wanted to see what all the fuss was about, so she decided to join him in Nolop while he finishes up his final project.  

When Lucy first walks into the Nolop lab she notices several little green robots, controllers, and rooms with grids spread out on the floor.  She picks up the controller next to Tim’s GiggleBot and turns on both the GiggleBot and the controller. She tilts the controller forward, instructing the GiggleBot to rotate its wheels in a forward orientation relative to the bot. She continues to tilt the controller forward until she drives directly into the wall.  She then tips the controller backwards, instructing the bot to rotate its wheels in a backwards orientation relative to the bot. She continues to tilt the controller backwards until it rams directly into Tim’s ankle. She then repeats this process by tilting the controller to the right and then to the left. 

Tim then picks up the controller for another bot in the same pod and turns on both the bot and the controller. Tim looks at Lucy and indicates that they are going to switch control of the bots. Both Tim and Lucy press the A button to gain control of each other’s bots.  They each look at their new bots and determine whether the bot needs to move forward, backward, to the left, or to the right. 

Two other students from Tim’s Human Machine System Design course pick up controllers for the other bots in the same pod.  They each turn on their respective controllers and bots and begin driving them around the room. Now that all bots are turned on, the master controller, in this case Lucy, decides she wants to gain control of all the bots.  Lucy observes the locations of all the bots and determines which direction she would like them to move. All at once, Lucy takes control over all of the bots in the pod, moving them around the room in a group. 

Video Overview

Reflections and Future Directions

This assignment brought forth some of the human factors challenges that come with swarm system design. The coordination needed between the coders during the design process was vital, and getting everyone on their respective teams to understand the system proved to be key as well. Creating a universal system that is easy for everyone to use and that allows more than one person to enter the path of control proved to be a very difficult task. There are some arbitrary concepts when it comes to control. For example, “50% control” is arbitrary when we do not necessarily know what constitutes 50%. Designing the system forced us to define a lot of these unknowns. This project also helped us learn that human factors design isn’t necessarily always tactile. When figuring out exactly how the system was going to work, everything was hypothetical until it was coded and tested. Visualization of a system is hard to do, and this project tested our ability to take a task analysis and transform it into an execution, rather than an interface or product. It definitely helped with the psychological side of the human factors process. In terms of designing the controller, using anthropometrics was the main challenge in finding a design that could fit the intended population. It would have been very easy to put the chip in a simple shape, but instead we took the time to define an intended audience, in this case the 5th-95th percentile of the male population, take the measurements, and create an adaptive ergonomic design that all members of our team could easily use. This was a great opportunity to take a real-life case, design for someone we could see using the product, and make changes accordingly. Overall, this project tested both sides of human factors: physical design and system design that puts the user first. 

Future directions for a project like this would be to increase the number of robots being controlled and the number of users, and to vary the amount of control a user or master user has. Creating flexibility for a user to determine how much control they have over the swarm is a very complicated task, and it again comes down to defining what 50% or 90% control means. Especially as swarms grow large, passing control gets very complicated. Designing a system that seamlessly integrates what each user wants simultaneously would be critical to making it successful. Redesigning the interface would be very important as well. Instead of using two buttons to receive and give control, there could be some sort of interface that lets the user highlight what part of the swarm he/she would like to control, so that there isn’t any confusion or mixup when passing control. Having two buttons may create mistakes about who is receiving control. In terms of the controller, creating a more ergonomic design with exterior materials that are more comfortable to use, rather than sheets of plastic, along with buttons that are easier to press, would improve the experience. A new kind of interface, depending on the system, would help as well, such as a touchscreen with an ergonomic handle. Such a controller would make it easy to see which user is controlling what, making it simple to pass control within a big swarm. 

Blog 9: Automation & Chatbots in Financial Services

Businesses are increasingly beginning to use chatbots for users seeking customer service. Chatbots use AI in order to help answer users’ questions. Businesses employ chatbots because they lower the cost required for customer service representatives and allow users to get instant feedback. Many financial firms have begun to adopt chatbots, which I believe is an exciting application and presents many interesting questions regarding users’ perceptions and sense of privacy.

Most banks view bots as an opportunity (1).

  • Most firms are investing in bots (1).
  • Many banks believe that bots have the potential to take over conversations usually handled by customer service employees (1).
  • Bots provide instant responses (1).
  • Many banks need help building bot technology (1).

78% of retail bank customers seek guidance with their banking; however, only 45% of customers felt that the digital experience they received met their needs (2). Chatbots represent a mechanism to help improve customers’ digital experience and close gaps in consumer knowledge.

All of the major banks have begun to release their own chatbots. For example, Bank of America released Erica in order “to send notifications to customers, provide balance information, suggest how to save money, provide credit report updates, pay bills and help customers with simple transactions” (3).

Bank of America’s chatbot, Erica. (4)

The rise of automation allows technology to perform tasks that businesses would otherwise pay employees to do. According to PwC, by the early 2030s, around 38% of jobs in the United States could be automated (5). Interestingly, the likelihood of one’s job being automated is highly dependent on the level of education that the job requires. For example, in the UK, it’s estimated that 46% of jobs that only require a high school degree will be automated, while only 12% of jobs that require an undergraduate degree will be replaced by automation.

The US has the highest share of jobs at risk of automation. (5)

These statistics point to the fact that the financial services industry will continue to develop technology, like chatbots, that automates tasks and, ideally, also improves the customer’s experience. Currently, chatbots can direct consumers to educational resources and perform simple tasks; however, chatbots in the financial industry do not yet have the ability to perform more complex tasks.



Blog 8: IoT

The “Internet of Things” (IoT) refers to the ability of connected devices to transmit data between one another. Effectively, this allows devices to communicate with one another and automate tasks. For example, say your digital calendar notices that you are running late to lunch with a friend based on your car’s location; your phone could then use this information from the calendar and the car to send a text message to your friend telling them that you’re running late. IoT is expected to grow rapidly in the upcoming years, at an annual rate of 28.7% between 2020 and 2025 (1).
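The "running late" scenario boils down to a simple automation rule. Here is a toy sketch in Python, with the calendar, car, and phone stubbed out as plain values and functions (these names and the message format are entirely hypothetical, not a real IoT API):

```python
# Toy sketch of the IoT "running late" automation described above.
# Times are minutes past midnight; all device data is stubbed.

def minutes_late(event_start_min, now_min, drive_time_min):
    """How many minutes past the event start the user would arrive."""
    arrival = now_min + drive_time_min
    return max(0, arrival - event_start_min)

def running_late_message(friend, late_by):
    # Only message the friend if the user is actually going to be late.
    if late_by == 0:
        return None
    return f"Hey {friend}, running about {late_by} min late!"

# Lunch at 12:00 (720 min), it's 11:50 (710 min), and the car is 25 min away.
late = minutes_late(720, 710, 25)
print(running_late_message("Sam", late))  # Hey Sam, running about 15 min late!
```

The point of IoT is that each input here (the calendar entry, the car's location, the drive-time estimate) would come from a different connected device, with no human stitching them together.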

Future IoT applications in a “smart” city (2)

IoT applications in the home have been popular among technology companies, such as Amazon. Here is a summary of current smart home devices on the market and their uses:

Echo Show with Philips Hue Light Bulb

The Echo Show can play music, TV shows, and movies, and make video calls. The device is also able to use IoT to turn on specific lights at the user’s request.

Amazon Smart Plug

The Smart Plug works with Amazon’s Alexa to turn on or off any device that uses an outlet. The user can use their voice to turn an appliance, light, or device on or off, and can also schedule devices or switch them remotely.

Amazon Alexa

Alexa acts as the “brain” of the smart home and is able to connect to devices like those described above.

Going forward, IoT’s capabilities will be increased by future innovations in big data and machine learning. There have also been some security concerns with IoT. For example, some worry that Alexa “has been eavesdropping on users’ conversations.” As technology companies continue to develop IoT devices, they will have to provide processes or capabilities that allow users to indicate their privacy preferences. Personally, I’m very interested in where IoT will go and how the smart home will continue to evolve.



Blog 7: GPS and GIS

This week in ENP 162 Human-Machine System Design, we worked with the data lab at Tufts to learn more about GPS and GIS. GPS, the Global Positioning System, was originally developed by the US military; however, it has had profound impacts on technology industries. GIS, a geographic information system, is a way to visualize, interpret, and analyze data provided by GPS.

An example of a GIS system (1).

There are around 30 satellites constantly orbiting the Earth at an altitude of around 20,000 km (2). From any point on the Earth, at least 4 satellites are visible at any time, which is what allows a receiver to determine its position. Satellites, receivers, and ground stations all work together to express a location in longitude and latitude coordinates (3).
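The core geometry can be illustrated with a toy 2-D version of the positioning problem: given known anchor positions and measured distances, solve for the receiver's coordinates. (Real GPS works in 3-D and also solves for the receiver's clock error as a fourth unknown, which is why a fourth satellite is needed; the sketch below is a simplified illustration, not a real GPS algorithm.)

```python
import math

# Toy 2-D trilateration: subtracting the circle equations pairwise
# turns the problem into two linear equations in (x, y).

def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1          # solve the 2x2 linear system
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Receiver actually at (3, 4); distances measured from three known anchors.
anchors = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(a, (3, 4)) for a in anchors]
print(trilaterate_2d(*anchors, *dists))  # approximately (3.0, 4.0)
```

Satellites play the role of the anchors here, with distances inferred from signal travel time.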

An overview of how GPS works (4).

GPS and GIS have a myriad of applications. Here is a list of everyday applications of when you may use GPS and GIS:

  • Google Maps
  • Uber Eats
  • Facebook “check-in”
  • Yelp
  • Snapchat “filters”
  • Google search engine: “find bike stores near me”

GPS and GIS also have a lot of implications in academia and research. Here are some examples of important questions researchers examine with this data:

  • Migratory patterns of endangered animals
  • Displacement of refugees
  • Spread of antibiotic-resistant diseases
  • Oil spills
  • Spread of invasive species

GIS and GPS have a dynamic future with serious potential consequences. With the further development of big data and the increased ability to track and locate individuals, this type of technology could raise serious ethical and societal issues in the wrong hands. Although this technology has the potential to cause harm, it also has an incredible ability to help humans, the environment, and animals. If epidemiologists were able to identify the spread of harmful diseases faster, then many human lives could be saved. Or, if researchers were better able to identify the migratory patterns of endangered species, then mechanisms could be put in place to limit humans’ impact on these species.

GIS of health-related issues in the USA (5).

In conclusion, like most other technological aspects in this course, GPS and GIS capabilities are going to continue to improve, and we must consider how these technologies will impact society–both positively and negatively.



Blog 6: Social Robots

A social robot can be defined as “an artificial intelligence (AI) system that is designed to interact with humans and other robots” (1).

Social robots have the potential to completely disrupt the customer experience across all industries. For example, retail stores may not even have a need in the future to staff their stores with people; instead, social robots and other forms of technology could be responsible for tasks like checking customers out, staffing customer service departments, and stocking shelves.

In 2018, Amazon opened their first cashier-less store named “Amazon Go” (3). Customers scan their Amazon app to enter the store and then are free to shop around. Customers only need to pick items off the shelf and then are free to leave the store with their items. The technology in the store is able to pick up what items customers grab and then electronically bills customers through their Amazon profile.

Overview of Amazon Go (2).

Although the stores are highly automated, there are still employees in the Amazon Go stores. Employees are needed in the store in order to stock shelves and assist customers. In the future, I believe that Amazon will look to use more advanced automated processes and social robots in order to eventually create a store that is completely employee-less.

In addition, Amazon currently employs a policy that is not sustainable on a wide scale. If customers are incorrectly charged or not satisfied with their purchase, then they are able to get refunded without any further questions. This system would hypothetically allow customers to take advantage of the system very easily. Social robots are one potential solution to ensure that customers are not able to take advantage of the system.

Amazon Go store in Seattle, Washington (3).

There are many unclear tasks that these social robotics will have to be able to perform. Customer service representatives often interact with customers that have highly unique problems. Because of this, it would not be possible for a programmer to instruct the robot on how to solve every possible problem that a customer may have. This is one of the current limitations of social robots as compared to humans: they are unable to handle more complex, situational responsibilities.

Customers scan their phone with their Amazon app open in order to enter the store (3).

Although they may have some limitations, social robots offer many potential benefits to companies. For example, social robots allow companies to spend less money on staffing and gain better access to customer issues. Hypothetically, social robots would be able to keep data on the types of interactions they have with customers. This data could be reported to companies in order to give key insights about customer pain points. Access to this data would allow companies to solve common customer issues at a faster rate.



Project 3: Future Humans


Approximately 68 percent of adults are overweight or obese (1). Being overweight or obese increases one’s risk for cardiovascular disease, stroke, certain cancers, and much more (2). The most effective way to reduce obesity is through diet and exercise.

Trends in obesity and overweight individuals from 1960 to 2006.

The health industry has a lot of room to grow through innovation. In the past 45 years, gyms have not changed very much in appearance. Usually, a gym will have a free-weights area, a machine-weight area, and a cardio area with machines like treadmills and ellipticals. There have not been truly innovative applications of technology to the gym experience.  We feel that the current gym experience is often inconvenient, confusing, and unmotivating.  

For our project, we are redesigning the gym experience. The new gym experience will be an in-home gym for added convenience. We will be focusing on the following aspects to augment the gym experience: 

Notes from the brainstorming session of Future Healthy You.


Because this is an in-home gym, maximizing storage of the gym itself is critical in allowing the user to have a multi-purpose space. Therefore, a compact in-home gym with creative storage solutions will allow users who normally would not have the space for an in-home gym to partake in the gym experience. In addition, the in-home gym setup and storage will be entirely automated so that the user does not need to waste time setting up the gym or storing it after a workout. 


There will be sensors on the gym equipment itself and nano-sensors for the humans. The gym equipment sensors will be able to communicate with the central computer program in order to help the program determine if proper form is being used. The nano-sensors will be prescribed by a doctor and ingested by drinking a tasteless liquid. The nano-sensors will be able to alert the program about the current physical state of the user. This data will be used to determine if the user needs to be motivated to work harder or if the exercise needs to be performed with less intensity. In addition, the nano-sensors will have technology that will enable injury prevention mechanisms. Lastly, these nano-sensors will be able to track the nutrition components of what people are eating. 

Machine Learning

Machine learning will be an integral part of the gym and will be applied in multiple scenarios. For example, machine learning will be used to act as a trainer for the user. The program will be able to recommend workouts, teach the user how to perform movements, analyze incorrect body movements, and learn how to best motivate the user. In addition, the machine will automatically set the resistance and number of reps to be completed, or the speed for cardio activities, so that the user does not need to remember the amount of weight and number of sets they performed last time. The nano-sensors will give data to the machine learning program that will inform these recommendations. Lastly, the program will be able to generate workouts that the user will be more likely to enjoy based on prior feedback.  


This new gym will be feasible in 75 years. We expect the nano-sensors to take the longest to be developed and approved, which is further discussed in the “How We Get There” section. 

Because there is the potential for serious injury, the computer program that analyzes body movements must be very accurate and precise. In addition, the program needs to be able to understand how to best motivate users, which is a highly individualized matter that may not be easily predicted. As discussed prior, the nano-sensors will be the most complex component to develop. With an ingestible sensor, there will surely be challenges in gaining approval from government agencies and then getting approval from users. We would need to be able to convince users of the benefits that the sensors provide and show that these benefits outweigh any potential concerns over things like privacy. Lastly, our in-home gym experience will require the expertise of many professions such as computer programmers, mechanical engineers, electrical engineers, human factors engineers, physicians, and personal trainers. 


For our user profiles, we considered two “types” of people that the Future Healthy You would be marketed to. It is assumed that both user profiles are not gender-specific. One group was young professionals in their twenties with a stable income and regular work schedule.  These people were assumed to already regularly workout and would be more interested in the efficiency and convenience aspects of the Future Healthy You device.  These people would not be significantly concerned about the cost of the device and would not be bothered by its technologically complex nature.

The other type of person the Future Healthy You would be marketed to is a person that has little experience independently exercising and is looking to be more in shape.  Though this can apply to all ages, the user profile shown below (for Charles Smith) depicts an older person with less technology experience.   This is because our device will be marketed specifically to be usable by people with less technology experience.  Though this will be in the future, and so the computer side and machine learning aspects may be less intimidating, the nanotechnology feature may be intimidating to older users (since we believe that the nanotechnology would be a newer technology).  By getting the support of various medical professionals, and by making our device be easy-to-use (requiring little technological knowledge from the user), we should be able to assuage the concerns of the older generation. 

The main appeal of the device for this second group is that the device is able to help people with little exercise experience workout safely and effectively.  For the elderly, safety is an even more important factor, since too-intense exercises could result in serious harm. 

Additionally, based on current trends of people with little exercise experience being willing to try new fads and diets, we think the Future Healthy You would be relatively easy to market to this group.  Also, people trying to get in shape for the first time are often intimidated to go to the gym (4) if they have no exercise experience or are self-conscious about their body image.  The private nature of these workouts would be reassuring to this demographic.

To this group we would market the convenience of going at your own pace and the efficiency of the workouts. 


Brainstorming of specific solutions for Future Healthy You.
Brainstorming of specific solutions for Future Healthy You.

Please click below to download Future Healthy You task analysis.

The Future Healthy You system will be part of a smart home in the future.  In the future, humans will still need to stay in shape, however, going to the gym will be a thing of the past.  Instead, the design team envisions that users will work out in the comfort of their homes. 

Instead of needing to use various types of machines, barbells, dumbbells, weights, and cardio equipment, there will be an elegant, all-in-one solution that meets both resistance and cardio requirements and is seamlessly integrated into the user’s space.  This will allow the user to meet all of their workout needs in an efficient manner.  In addition, since most users have limited experience with exercise physiology and human performance science, a specialized sensor system, managed by trained physicians, will be part of the system to automatically provide the most efficient and targeted training possible. 

The Future Healthy You system consists of three main parts:

  1. An interactive screen and resistance system mounted on the wall
  2. A treadmill-like surface built into the floor that comes up out of the floor as needed
  3. An ingestible nanosensor system that monitors every aspect of the user’s performance and provides feedback via machine learning in a seamless manner

This system is our vision for how future humans will exercise their bodies in approximately 50-75 years.  The core of the Future Healthy You system is the specialized nanosensor drink, which is prescribed by a doctor and ingested.  By including a physician in the design, the user can be assured that they are minimizing risk and getting the most effective information provided to them.  The nanosensors will enable the system to track every aspect of the user’s performance and use big data and machine learning to provide salient feedback for the user.  The nanosensor drink is also the reason the system is estimated to be 50-75 years in the future.  While other aspects of this system will most likely exist much sooner, the design team does not believe nanotechnology will have matured to the point needed until then.

As part of the system design, the team performed a high-level task analysis on various aspects of the system.  Some aspects of the system did not need an extremely detailed task analysis (such as picking up a prescription and taking it) and are noted as such in the attached Excel document.  For each task, the team looked at whether any information, decision, action, or analysis was needed and noted it in the task analysis Excel sheet.  Finally, for each task, the team estimated the level of automation using a 1-10 scale and noted it in the task analysis Excel sheet as well.

The system will require a high level of machine learning to provide users with the feedback imagined.  If anything were possible, then our future nanosensors would be a one-time drink users ingested that would monitor every cell in their bodies.  The sensors would provide information for calories burned,  hydration/caloric requirements, injury, form, power output, and any other pertinent information that may be required.  Machine learning would then take all of this data and provide users with highly tailored, specific feedback such as:

  • Increase/decrease resistance
  • Increase/decrease speed
  • Hydrate
  • Change form, de-load resistance or stop exercise to prevent injury

Since the system would be part of a highly connected network, the system could automatically perform some of the changes listed above, such as increasing/decreasing loads or speeds. 

The inspiration for this system came from two physical systems that are already in development at this point in time.  One is called Tonal and the other is called Mirror, shown below, respectively. 


For the Future Healthy You, there are two modes of communication between the user and the device.  First, the device itself has a screen to convey information to the user during the workouts.  Since exercising often requires the use of your hands, this screen would also be able to process and provide vocal communication. 

The display would be able to be tailored to meet a user’s specific needs as they use the device.  For example, the color scheme can be changed to suit the user’s preferences.  There would also be daytime and nighttime modes that make it easier for the device to be used at any time of day. 

An example default screen is shown below.  Here, the user can view information about what exercise they are doing, how far along into that specific exercise they are, and how far along they are in their overall workout.  There is information about music at the top of the screen, as many individuals like to listen to music while they exercise.  These features can be hidden in the settings menu to make the display less complex for those who are overwhelmed by multiple sources of information. 

Some people, especially if they are familiar with the exercises and have no need for the videos, can select the heart icon on the left in order to display their health data (they can also provide a voice command if they want to switch between screens mid-workout). 

This screen (shown below) displays relevant health information in a format similar to a bar graph.  As the user exercises, the display shows the current physical states being registered by the nanotechnology.  Since the units differ across the various health states, they are listed above each pillar in the plot rather than along the vertical axis.  If the health data is in the acceptable range (determined using information available to the computer), the data is presented in green.

However, if a parameter is outside of the acceptable range, the pillar is displayed in red, a warning sign (an “!”) appears on the plot, and a warning message appears below the plot as well.  During this, the device would also emit an auditory warning, if the user requests it.  It is important to have multiple representations of potential danger, on the screen and via the speakers, to avoid liability and to best help the user have a safe workout.  The user would be notified that the workout is automatically being altered to try to bring their health readings back within safe levels. 
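The green/red display logic described above amounts to a range check per parameter. Here is a small sketch of how it might look; the parameter names, ranges, and message format are illustrative assumptions only.

```python
# Illustrative display logic: each health parameter is checked against an
# acceptable range and rendered green, or red with a warning, when outside it.
# Parameter names and ranges are assumptions for the sketch.

ACCEPTABLE = {
    "heart_rate_bpm": (60, 180),
    "hydration_pct": (90, 100),
}

def render_state(param, value):
    low, high = ACCEPTABLE[param]
    if low <= value <= high:
        return {"color": "green", "warning": None}
    return {"color": "red",
            "warning": f"! {param} reading {value} outside safe range {low}-{high}"}

print(render_state("heart_rate_bpm", 190))
```

The warning string would drive both the on-screen message and the optional auditory alert, so both channels always agree.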

If there is a warning due to health data but the human knows they are alright, they are able to vocally override the changes to their workout, following the written and auditory instructions coming from the device.  The machine then takes this knowledge and adjusts its parameters accordingly, using machine learning.  It also makes a note in the app so that the information can be shown to a physician at the user’s annual physical.  Sometimes sensors have glitches, or a human simply has an unusual reaction to exercise.  This ability to override the machine despite health warnings gives the human a sense of control over their workouts, while still letting the machine do the majority of the work.  Since humans are trusted to be responsible for their own safety when using ordinary exercise equipment, we thought this was an acceptable feature to include in the device.  It is hoped that this will help reduce errors and allow the device to be tailored to each user.  

The user would be able to control their workout schedule and view their health data history on their phone (this information would also be available via the device itself).  

Through the phone, the user would get alerts a short time before they are scheduled to work out (this is so the user knows whether or not they will actually be able to exercise that day). 

The user would either accept or reject their scheduled workout.  If they accept, they proceed to the Future Healthy You device and begin the workout. 

If they reject the workout, they are prompted to schedule a new time to work out (if for some reason they do not want to reschedule, they can opt out of this feature, but it is the default setting in order to encourage keeping people to their workout goals). 

For rescheduling, they go through three pages in the app.  The first page is the calendar page, where the user selects a new date.  They do this by tapping the date they want to reschedule to.  All current workouts are highlighted by colored circles, and when a new date is selected, that is also shown in a colored circle.  In the example below, the user works out four times a week, and so there are initially three dates shown on the screen. 

The user selected Tuesday, March 11 as the new workout date.  

From here, the user clicks the right arrow next to the calendar icon near the top of the screen (highlighted below). 

The user is directed to schedule a time for their new workout. 

The arrow is tapped again.

Now, the user explains their reasons for skipping their workout.  Though this is not necessary, any information they provide can be used by the Future Healthy You system to improve the workout schedule using machine learning.  The drop-down menu is also updated based on previous responses that the user has provided, to make it easier and quicker for them to provide feedback. 

In the example shown below, the user selects “Other”, and types an explanation into the text box. 

The user can also access their health data within their phone.  There are a number of categories of information presented.  The screen below shows information regarding the user’s step count, plotted with the number of steps on the vertical axis and the date on the horizontal axis.  The user can choose to “friend” people who also use the device and pick whether their data is available to those friends (the user can give permission for all or only some types of their data, in case there is a limit to what they are comfortable sharing). 

The user can pick which data is shown in the plot by flipping the switch next to a username.  In the figure above, the data for Jenna Johnson is shown in yellow, and the information for Carla Summers is shown in green.  By selecting on a data point, the exact number of steps is shown. 

This sharing feature allows the user to still feel like exercising is a way to connect with people, even if they are not physically near them.   Since the data-sharing feature is optional, it also avoids making users feel forced to compete. 
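The per-friend, per-category permissions described above could be modeled with a small data structure like the following. The class and method names (and the friend names, borrowed from the example screens) are hypothetical.

```python
# Hypothetical sketch of per-friend sharing permissions: each friend is
# granted access to some data categories, and queries are filtered to those.

class SharingSettings:
    def __init__(self):
        self.grants = {}  # friend name -> set of shared data categories

    def share(self, friend, categories):
        self.grants.setdefault(friend, set()).update(categories)

    def visible_data(self, friend, data):
        """Return only the categories this friend is allowed to see."""
        allowed = self.grants.get(friend, set())
        return {k: v for k, v in data.items() if k in allowed}

settings = SharingSettings()
settings.share("Jenna Johnson", {"steps"})
data = {"steps": 8500, "heart_rate_bpm": 72}
print(settings.visible_data("Jenna Johnson", data))  # → {'steps': 8500}
```

Defaulting every friend to an empty grant set makes sharing strictly opt-in, matching the privacy stance described above.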

The figure below shows the sample menu of health data available to the user.  Though the presentation of data might be a different type of plot, the overall layout of the page should remain the same.  This page is reached by swiping right in any of the health screens, or by clicking the menu symbol in the top left corner of the screen. 

When the user opens the app on their own for the first time after installation, the phone will automatically walk the user through a brief and efficient tutorial on how to use the device. 

The device itself will be set up with the help of either a virtual assistant or an in-person employee, depending on the preference of the user.  Either way, the user will be guided through the steps on how to use the Future Healthy You device, and will have help setting up their initial health goals and beginning their workout plan. 


To get to our envisioned system from where we are currently, the real work lies in the nano-sensor field. As stated previously, this is what we see taking the longest to develop. Currently there are no nano-sensors that can remain in a human, fully powered, detecting everything outlined above, and transmitting data. However, scientists and engineers continue to push nanotechnology to new heights, and the team believes that, given enough time, everything outlined above is achievable. The first challenge will be making a nano-sensor that can stay powered on inside the body. Once that is done, the next challenge will be making the nano-sensors connected and able to transmit far enough to provide useful data outputs. Finally, the actual sensors will have to be refined to provide the data required as outlined above. In 50-75 years, we may actually be there.


Since the Future Healthy You system involves the medical procedure of putting sensors indefinitely into the human body (though not a surgery, this process still would count as a medical procedure) there are many ethical issues to consider. 

First, it is important for the user to provide informed consent to what they are agreeing to.  Since we are assuming nano-sensors will be a new form of technology, the public will have little general knowledge about them.  The user must consent not only to the health risks of the procedure (even though these risks should be minimal, there is always the possibility for error) but also to the fact that data about their body is being collected and wirelessly transmitted.  Though there isn’t the risk of “hacking” the user’s body and altering their behavior in any way, the health information collected could become available to others. 

It is also important to make sure that the nano-sensors are considered safe for human use.  This would require extensive tests before ever getting to the human trial stage.  When the device is released for public use, it is important that the Future Healthy You producers are able to confidently provide evidence that there is minimal health risk to any users. 

Since the information about the health process is so critical, it is important that the company be open with doctors about how the nano-sensors work.  This is more important than “trade secrets” or “intellectual property”.  To protect the company’s profits, doctors would ideally sign a confidentiality agreement saying they would not share any information about the way the technology works, provided that no harm is being posed to anyone. 

There are other ethical factors to consider when designing medical devices and exercise equipment.  The device has the potential to promote body image issues.  Though this isn’t the device’s intention, it is something doctors should look out for when completing their evaluation.  While it is not the company’s responsibility to address this directly, its ads should focus on the health benefits, not on the weight-loss aspect (though this device has a weight-loss option, it is intended for health-driven purposes, not body-image ones). 

Finally, it was discussed in the ENP 162 course that elderly people struggle to get sufficient social interaction.  For some, the Future Healthy You device may encourage them to exercise at home rather than socialize at an exercise class.  Again, this is not the responsibility of the company, but it is something doctors should be asked to consider before granting a user access to the device and nano-sensors. 


For the purpose of this project, we focused mostly on the machine learning and sensor components of the home gym; however, as discussed in the introduction section, diet is also an integral part of achieving health and wellness goals. A future direction for this project would be to focus on how futuristic applications of machine learning could be applied to nutrition. In addition, our gym system still requires users to commit to working out. We attempt to encourage this by using machine learning to produce workouts that they are most likely to enjoy; however, we did not discuss ways to motivate users who are not interested in working out to begin with. A future direction would be to study how machine learning could draw in users who would not normally go to the gym. Our in-home gym is confined to traditional gym experiences (lifting weights, treadmills, etc.), and we did not apply futuristic technologies like virtual reality to create alternative exercise experiences (e.g., hiking or playing sports). Lastly, we did not focus on the types of workouts themselves due to how individualized and specific they will be for each user.



Blog 5: Machine Learning

Machine learning refers to computer programs that have the ability to learn. Computers are able to do this through analytical and statistical components written into the program. In supervised machine learning, a program is given a set of labeled training examples and is then able to draw conclusions from new data. In unsupervised machine learning, a program is not given training examples and must find structure in the data on its own.
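The supervised case can be illustrated with one of the simplest possible learners, a 1-nearest-neighbour classifier: "training" just memorises the labelled examples, and a new point takes the label of the closest one. The data here is made up for the sketch.

```python
# A minimal, dependency-free illustration of supervised learning:
# 1-nearest-neighbour. Training data is a list of ((x, y), label) pairs.

def nearest_neighbor(train, point):
    """Return the label of the training example closest to `point`."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

training_examples = [((0, 0), "cat"), ((0, 1), "cat"),
                     ((5, 5), "dog"), ((6, 5), "dog")]

print(nearest_neighbor(training_examples, (1, 0)))  # → cat
```

An unsupervised method, by contrast, would be handed only the four points with no labels and asked to discover the two clusters itself.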

The following is an overview of applications of machine learning. 

1. Facial recognition and imaging software are being developed to help physicians identify diseases like cancer. 

2. Machine learning can detect fraud better than humans because algorithms are able to process much larger amounts of information. Machine learning is able to learn users’ purchasing habits and detect anomalies that may indicate fraud.  

3. Recommendation engines are ubiquitous in consumer products. For example, Netflix recommends new shows to users, UberEats recommends new restaurants to try, and Amazon suggests products that its users may be interested in. 

4. Self-driving cars use machine learning to improve the safety of their driving. 

5. Social media analyzes a given user’s activity to generate content for them. For example, Instagram has a discover page that’s created based upon a user’s prior activity. Facebook analyzes users’ prior activity in order to generate advertisements that are targeted toward specific users.

Looking forward, machine learning’s capabilities are only going to expand. Companies are going to be able to segment their market more finely, which will result in advertisements and products that are more specifically geared towards specific groups. This will result in a user experience with greater personalization. Also, machine learning will be able to use big data in order to better predict outcomes. This could have applications with forecasting stock prices, the weather, or political outcomes.


Blog 4: Neural Networks

Neural networks are integral to the development of artificial intelligence and machine learning. Neural networks in machines are loosely modeled on how humans process information. In humans, there are billions of neurons that send electrical impulses and communicate at synapses. 


Neural networks are modeled on human neurons. In computer science, a neural network interprets data by categorizing it. Neural networks are composed of nodes, the points where processing occurs, similar to the role of neurons and their synapses in humans. Each node receives inputs, multiplies each input by a learned weight, and sums the results. That sum is passed through the node’s activation function, which determines whether and to what extent the input should influence the final output (similar to how a human neuron either fires in response to a stimulus or does not).
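A single node is small enough to write out directly. The sketch below uses a sigmoid activation; the specific weights are arbitrary illustrations, not learned values.

```python
import math

# One neural-network node: weighted inputs are summed, then passed through
# an activation function (a sigmoid here) that decides how strongly it fires.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias=0.0):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Weighted sum = 1*0.5 + 0*(-0.3) + 1*0.2 = 0.7, squashed to about 0.67
print(node_output([1.0, 0.0, 1.0], [0.5, -0.3, 0.2]))
```

Training a network consists of adjusting the weights and bias so that outputs like this one move toward the desired answers.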

Neural networks are able to perform more advanced functions as the number of node layers increases. These deeper networks typically learn a feature hierarchy: the level of abstraction increases as data moves deeper into the network. As pictured below, in facial recognition, a network may begin with individual pixels in the first layer and end with whole human faces by the final layer. 

Neural networks can also be applied to big data. This application can be particularly useful as scientists grapple with how to interpret the massive amounts of data now available to us. For example, neural networks could be applied to forecasting stock prices, predicting disease outbreaks, and identifying criminals through face detection. Although neural networks have very useful applications, there are broader implications that we must consider. For example, if facial recognition misidentifies certain races as criminals at a higher rate, it would lead to greater discrimination. In fact, this is a common concern with facial recognition software being deployed.  


Blog 3: Signal Detection & Information Theory

Signal detection theory measures a user’s ability to differentiate a specific signal from noise. For example, an air traffic controller needs to be able to differentiate an airplane (the signal) from large clouds (a potential source of noise). 

The differences between automation and human inclusion in system design.

Automation has the potential to be incorporated into signal detection. If there is an environmental hazard, a given system has a mechanism (usually sensors) to pick it up. These sensors can then either feed straight into automation or alert the human to the hazard. For example, in an AC system, a thermostat will sense a deviation from the set temperature and signal the system to increase or decrease the temperature. If this system did not work via automation, the AC would have to alert the user of the temperature change via its display and require the user to take action to activate the system. It would be very cumbersome if AC systems worked in this manner. 
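The thermostat example is essentially a sense-compare-act loop with no human in it. Here is a minimal sketch; the function name and the one-degree deadband are assumptions for illustration.

```python
# Illustrative thermostat automation: compare the sensor reading to the set
# point and act directly, with no human in the loop. Deadband is an assumption.

def thermostat_action(current_temp, set_temp, deadband=1.0):
    """Decide what the AC system should do for a given reading."""
    if current_temp > set_temp + deadband:
        return "cool"
    if current_temp < set_temp - deadband:
        return "heat"
    return "idle"

print(thermostat_action(24.5, 21.0))  # → cool
```

The non-automated alternative described above would replace the two `return` branches with alerts to the user, who would then have to act themselves.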

False positives and false negatives pose danger when they occur.

In addition, there are four different outcomes when distinguishing a signal from noise. The user can correctly identify a signal that is present (a hit), correctly identify that no signal is present (a correct rejection), identify a signal that is not there (a false positive), or fail to identify a signal that is there (a false negative). Let’s think about this within the scope of the medical field. If a doctor were performing an ultrasound looking for free fluid in the abdomen, then the doctor could: 

– correctly identify free fluid in the abdomen 

– correctly identify a lack of free-fluid in the abdomen 

– identify free-fluid in the abdomen that is not there (false positive) 

– fail to identify free-fluid in the abdomen (false negative) 

Of these outcomes, the false negative has the greatest potential to cause serious harm, because it leads to the doctor missing a potentially catastrophic diagnosis. 
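The four outcomes form a simple two-by-two table of (signal present?, responded present?), which can be written out directly:

```python
# The four signal-detection outcomes, using the standard names
# (hit, miss, false alarm, correct rejection).

def sdt_outcome(signal_present, responded_present):
    if signal_present and responded_present:
        return "hit"
    if signal_present and not responded_present:
        return "miss (false negative)"
    if not signal_present and responded_present:
        return "false alarm (false positive)"
    return "correct rejection"

# The dangerous case from the ultrasound example: fluid present, not detected.
print(sdt_outcome(signal_present=True, responded_present=False))
```

In the ultrasound example, `signal_present` is whether free fluid actually exists and `responded_present` is the doctor's call.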

Signal detection theory is an important consideration when designing systems because it accounts for the fact that humans are imperfect. There is a range of factors that can affect a user’s ability to detect a signal—like level of fatigue, physical abilities, and environment—so by designing systems that accurately alert users, we are able to increase the safety of these systems. 
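The "ability to differentiate a signal from noise" that signal detection theory measures is conventionally quantified as sensitivity, d-prime: the difference between the z-scores of the hit rate and the false alarm rate. The rates below are made-up examples.

```python
from statistics import NormalDist

# Sensitivity (d') from signal detection theory: how well an observer
# separates signal from noise. Example hit/false-alarm rates are made up.

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(0.9, 0.2), 2))  # → 2.12
```

A d-prime of zero means the observer is guessing (hit rate equals false alarm rate); larger values mean better discrimination.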


© 2021 ENP 162
