Portfolio Assignment #4 – Driver’s EDward

Zaila Foster, Neil Haigler, and Fallon Shaughnessy

Product Brief

Driver’s EDward! This driver’s-education social robot will teach student drivers, make their learning less stressful, and step in to protect them if they end up in a dangerous situation. We expect this to be a feasible reality in 20 years, when all new cars and most cars on the road are self-driving. Our social robot will be purchased as a small box that plugs into the desired car and interfaces with the car’s self-driving capabilities and heads-up display. The heads-up display, or HUD, will still show important information in the center of the student’s field of view, but it will also show our robot’s avatar. As a social robot, our system needs something resembling a human for the user to interact with, and we feel that a non-physical representation of a robot in the HUD is much safer than an actual robot in the passenger seat, or even an actual human in the passenger seat. With a robot teacher in the HUD, the robot can express the fear or joy that the student’s driving evokes. Beyond showing emotion, the robot will be a constant source of encouragement and reliability for the student. Because the car is already self-driving, our robot can jump in to correct any mistakes that become dangerous, and the vehicle’s monitor can be used to show the student what they should do, or should have done, in any situation. All in all, we feel that a well-programmed robot could be far better than any driver’s ed instructor, offering the standardization, reassurance, patience, and succinctness that we all wish we had in our instructors as we went through this stressful rite of passage.

Background

We set out to make Driver’s EDward for many reasons. Learning to drive is not a standard procedure: almost every state has a different set of hoops to jump through to advance from a learner’s permit to a driver’s license. There appears to be just as much variation in the instructors. After several interviews with people who had gone through driver’s education with a human instructor, we had a list of instructor horror stories ranging from unclear instructions, to cocaine use, sale, and arrest, to violent outbursts and unprofessional behavior. In direct response to the issues we found through interviews, we believe that a robot will give clear auditory and visual instructions, that a robot will not set up a driver’s ed school as a front to use and sell cocaine, and that a robot will not be reduced to screaming and punching the dashboard when a student makes innocent mistakes at 5 miles per hour in a parking lot.

We also see that student drivers starting with no experience are already in a stressful and dangerous position. While drivers aged 16–20 account for a relatively small share of accidents overall, they have the highest rate of fatal accidents of any age group. This early stage of development and reflex learning, of turning higher-brain decisions into automatic, life-saving responses that require no thought, takes time and practice. Driving is a very unnatural act as far as our evolution is concerned, and it takes time to turn something like slamming on the brakes or swerving to avoid a collision into a response we do not need to think about. In a lecture and interview with Dr. Divya Chandra, a trained pilot and expert on standardization of instrument procedures and pilot human factors, she explained that one of the most important things an instructor does is let a student make mistakes in a safe way. This teaches the student why certain rules are followed, and it gives them raw experience making and correcting mistakes, which builds life-saving reflexes. We believe that a robot able to take the wheel at the last second to protect and correct the student will be not only safer but more educational: it will be more patient with students and allow them to make mistakes in a carefully risk-calculated way while always being able to correct when needed.

User Physicality (Limitations, Differences, Aspirations)

If we are to make a robot that will be interacted with in a high-stress situation, it must not cause physical stress. The physical limitations of humans have mostly been taken into account in the design of the car itself. However, although the robot will not be physically touched, it could cause significant eye strain, and it could distract the driver more than teach them. We took into account the driver’s field of view and where they would need to be able to see, and made the avatar of our robot visible but unobtrusive. We also designed it to be the right brightness to prevent eye strain while still being easily visible. This required extensive research on human peripheral vision and the rods and cones of the eye. Human peripheral vision relies primarily on rods, which are very sensitive to black-and-white contrast, as opposed to cones, which are clustered mostly in the fovea centralis, the part of the eye responsible for the center of vision. While the center of our vision and our cones are very good at detecting colors and color contrast, our peripheral vision is better suited to black-and-white contrast and motion. Because of this, we decided to give our robot avatar a black-and-white outline: it sits in the periphery, where it can be passively observed. However, other research shows that amber and red are better for getting attention in an emergency, so those colors will be part of any alert system or intervention. To achieve a calming balance of mood and color, we made the outline of the avatar black for the contrast described above, and the body blue.
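As a concrete illustration of these color choices, here is a minimal sketch of how the avatar’s palette might be encoded. The rendering API, color values, and brightness scale are all our own illustrative assumptions, not a real HUD interface:

```python
from dataclasses import dataclass

@dataclass
class AvatarPalette:
    """Display palette for the HUD avatar (all values illustrative)."""
    outline: str       # high black/white contrast for rod-dominated peripheral vision
    body: str          # calming fill color for the avatar body
    alert: str         # attention-grabbing color reserved for interventions
    brightness: float  # 0.0-1.0, tuned low enough to avoid eye strain

# Passive palette following the rationale above: black outline for
# peripheral contrast, blue body for calm, amber reserved for alerts.
PASSIVE = AvatarPalette(outline="#000000", body="#3A6EA5",
                        alert="#FFBF00", brightness=0.4)

def palette_for(state: str) -> AvatarPalette:
    """Brighten the avatar and switch the alert color to red in an emergency."""
    if state == "emergency":
        return AvatarPalette(outline="#000000", body="#3A6EA5",
                             alert="#D32F2F", brightness=1.0)
    return PASSIVE
```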

User Psychology (Desires, Constraints, Fears)

Learning to drive is a very stressful experience for everyone involved, but especially the students. Our human-robot interface was designed to be trust-earning, supportive, and empowering from the very beginning. We had to take into consideration not only a student’s mental misgivings about a robotic teacher, but also their goals and their foundation for future learning. We would need to focus first on fear and trust building. Any rational person is going to need assurance that their teacher is competent, and this will not be the same as mom or dad teaching their teenager to drive: the student will have grown up with their parents driving them around, and their parents will hopefully have already earned their trust. For our robot to earn trust, we decided the best opportunity would be for driver’s education, normally taught online, to be taught in the car. This allows the robot to be perceived as a completely competent driver from the very beginning, and not just an emergency system that might intervene clumsily. It also allows the robot to show rather than tell when teaching, as students learn best when more channels of input and explanation are used. Rather than pointing at a whiteboard or a picture on a classroom screen, the robot and its monitor can physically demonstrate correct driving, and use the screen to display hypotheticals and accident scenarios. If the student and their robot instructor encounter someone else’s poor driving, complete with honking and blame directed at the innocent student-robot team, something that could terrify a new driver and induce crippling stress, the robot can use the experience as a teachable moment: it can show what the other driver was doing wrong and help the student understand why emotional driving and road rage are dangerous.

During the next phase we needed to consider the psychology of learning and stress. By this point students will have had time to learn to trust the instructor and will be practicing everything they have learned. This bombards the student with stress, embarrassment, and fear, and tests their determination and abilities. We needed to design a system that achieves a healthy balance between stress and learning opportunities. Rather than attempting to predict what that perfect balance is, we elected to use machine learning to customize every student’s robot to achieve it. Sensors will monitor pupil dilation, heart rate, and other stress indicators, allowing dynamic learning and customization of the system. The system will be able to test different tones and phrasings for intervening and correcting mistakes, and compare the amount of stress induced with the promptness of the student’s response. Over a long enough time, the system will also be able to compare the language used against how well the student remembered the lesson, and weigh stress against the importance of the lesson or the mistake made.
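As a rough sketch of that feedback loop, the following Python chooses an intervention tone, observes the stress it induces, and keeps a running stress estimate per tone. The tone labels and the 0-to-1 stress composite are our own illustrative assumptions; a production system would use a far richer model:

```python
import random

TONES = ["gentle", "neutral", "direct"]  # candidate intervention styles (illustrative)

class ToneSelector:
    """Epsilon-greedy selection of the intervention tone that keeps a
    student's measured stress lowest -- a simple stand-in for the
    machine-learning component described above."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.avg_stress = {t: 0.5 for t in TONES}  # running stress estimate per tone
        self.counts = {t: 0 for t in TONES}

    def choose(self) -> str:
        if random.random() < self.epsilon:  # explore occasionally
            return random.choice(TONES)
        # Otherwise exploit the tone with the lowest average stress so far.
        return min(self.avg_stress, key=self.avg_stress.get)

    def update(self, tone: str, stress: float) -> None:
        """stress: a 0.0-1.0 composite of pupil dilation, heart rate, etc."""
        self.counts[tone] += 1
        n = self.counts[tone]
        # Incremental running average of observed stress for this tone.
        self.avg_stress[tone] += (stress - self.avg_stress[tone]) / n
```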

As students begin to remember lessons and learn from their mistakes, the system will give them encouragement and praise. While research shows that automated, “canned” praise is less meaningful than praise from a real person, our system will offer the usual “good job” and also praise students in a more meaningful way, by recording and displaying their improvement. We believe that a statement like “you relied on the autopilot 50% less today, and perfectly navigated that rotary” will mean a lot to our students. This builds confidence and leads to the student getting more comfortable and needing the robot less.
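A minimal sketch of how such progress-based praise could be composed from a session log; the metric names, numbers, and helper are hypothetical:

```python
def progress_praise(metric: str, before: float, after: float, extra: str = "") -> str:
    """Compose praise from recorded improvement rather than a canned phrase.

    `metric`, `before`, and `after` would come from the session log, e.g.
    autopilot reliance dropping from 0.50 to 0.25 yields "50% less".
    """
    change = (before - after) / before * 100
    message = f"You relied on the {metric} {change:.0f}% less today"
    return message + (f", and {extra}." if extra else ".")

# progress_praise("autopilot", 0.50, 0.25, "perfectly navigated that rotary")
# -> "You relied on the autopilot 50% less today, and perfectly navigated that rotary."
```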

The ultimate goal is complete autonomy for a student who does not need a robot looking over their shoulder all the time. While most social robots are designed to encourage more interaction and invite use, ours will start prominent and gradually fade out as the student grows more proficient. The robot will still be able to jump back into presence and control at any dangerous moment, but the goal of a driver’s license, at least according to the law right now, is to be able to safely drive a vehicle that has no automated systems, and students need to be prepared for that. Weaning the student off of Driver’s EDward will ensure that they do not become overdependent on automated systems, an issue Dr. Divya Chandra raised: she described the dangers of pilots using autopilot for every flight, how dependent pilots become, and how quickly their skills atrophy. Our goal was to make a teacher, not an autopilot, and we need to work constantly toward that goal. By going through these three stages of learning, we believe we will help drivers be safer, more prepared, and ready to drive on their own. We believe that Driver’s EDward is the future of driving!
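One simple way to express this fade-out, assuming a single 0-to-1 proficiency score (an illustrative simplification of whatever the learning system actually tracks):

```python
def avatar_presence(proficiency: float) -> float:
    """Map proficiency (0.0 novice - 1.0 test-ready) to how visible and
    talkative the avatar should be. Presence fades as skill grows but
    never drops to zero: the safety override stays armed.
    """
    FLOOR = 0.05  # minimum presence reserved for emergency intervention
    return max(FLOOR, 1.0 - proficiency)
```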

Task Analyses

Link to full page:
https://drive.google.com/file/d/1Fgz5FijNz6EymXm4QH4eyymIDm_OF9UP/view?usp=sharing

Solutions

Sensors

Our system utilizes multiple sensors in order to guide the driver. Cameras on both the front and back of the vehicle give Driver’s EDward the visibility it needs to provide instruction. Radar detects moving objects such as other cars and pedestrians, as well as stationary targets like parked cars and buildings; this is necessary both for the safety measures around new drivers and for directing them in each task. Lidar provides precise distance measurements. In conjunction with radar, this lets Driver’s EDward make informed decisions about the driver’s environment: what is in it, and how far each object is from the car.
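As an illustration of how the two ranging sensors could work in conjunction, here is a minimal inverse-variance fusion sketch; the variance values are placeholders, not calibrated sensor specifications:

```python
def fuse_distance(radar_m: float, lidar_m: float,
                  radar_var: float = 1.0, lidar_var: float = 0.04) -> float:
    """Inverse-variance weighted fusion of radar and lidar range estimates.

    Lidar is typically far more precise (small variance), so it dominates,
    while radar keeps the estimate robust in rain or fog.
    """
    w_radar = 1.0 / radar_var
    w_lidar = 1.0 / lidar_var
    return (radar_m * w_radar + lidar_m * w_lidar) / (w_radar + w_lidar)
```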

Processors

Our social robot comes in a small box that plugs into the car’s USB port and then appears as an avatar on the heads-up display, within the driver’s field of view. Our robot has the following key features, tied together by the control loop sketched after this list:

  1. Sensors: cameras, radar, and lidar that inform the robot’s decision making
  2. Artificial Intelligence: two-way communication, voice controls, and informed advice based on learning
  3. Emotive Display: the HUD provides a digital avatar resembling a face that reacts to situations and interactions with the driver
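A minimal sketch of that loop, with `sensors`, `ai`, and `display` standing in for the three features above; the object interfaces and method names are imagined, not a real API:

```python
import time

def control_loop(sensors, ai, display, dt: float = 0.1):
    """Top-level loop tying the three components together.

    `sensors`, `ai`, and `display` are placeholders for the modules
    listed above; their methods are illustrative assumptions.
    """
    while True:
        frame = sensors.read()            # cameras, radar, lidar
        decision = ai.evaluate(frame)     # advice, praise, or intervention
        display.render(decision.emotion)  # update the HUD avatar's expression
        if decision.must_intervene:
            ai.take_control()             # safety override via the self-driving stack
        time.sleep(dt)
```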

Emotive Displays

Car Set-Up

Audio Message: The system is loading. You will hear instructions for the pre-driving check shortly. Please do not turn on the car until you are instructed to do so.

Welcome Screen

Audio Message: Hi Jane, welcome to your driving lesson. Now that you’ve completed your pre-driving check, we can get started.

Switching Lanes

Audio Message: Now we’re going to practice switching lanes. The steps for this include 1) turning on your signal, 2) checking your mirrors, 3) checking your blind spot by looking over your shoulder, 4) changing lanes if it is safe, and 5) lastly, turning off your signal after completing the lane change. Are you ready to start?

Parallel Parking

Audio Message: Now we’re going to practice parallel parking. Signal and come to a complete stop next to the highlighted car.

Audio Message: Signal and turn your wheel all the way to the left.
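Prompts like these suggest storing each lesson as data rather than hard-coded dialogue. A minimal sketch, where the sensor-state fields and the `speak` callback are hypothetical:

```python
# Each lesson is a sequence of spoken steps plus a completion check.
# The checks are illustrative callables against a hypothetical sensor state.
LESSONS = {
    "switching_lanes": [
        ("Turn on your signal.",                      lambda s: s.signal_on),
        ("Check your mirrors.",                       lambda s: s.mirrors_checked),
        ("Check your blind spot over your shoulder.", lambda s: s.head_turned),
        ("Change lanes when it is safe.",             lambda s: s.lane_changed),
        ("Turn off your signal.",                     lambda s: not s.signal_on),
    ],
    "parallel_parking": [
        ("Signal and stop next to the highlighted car.",         lambda s: s.stopped_beside_target),
        ("Signal and turn your wheel all the way to the left.",  lambda s: s.wheel_full_left),
    ],
}

def run_lesson(name, state, speak):
    """Speak each step and wait for the sensors to confirm it."""
    for prompt, done in LESSONS[name]:
        speak(prompt)
        while not done(state):
            pass  # a real system would poll sensors and coach on errors here
```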

User Walkthrough

Link to full page view: https://drive.google.com/file/d/1U3zRfYPAMMaGftW6gus2xrCfgta5YRaA/view?usp=sharing

Future Directions and Limitations

A significant limitation of Driver’s EDward is that it requires people to fully accept that self-driving vehicles are safer than human-operated vehicles. It also requires further development of self-driving cars: what is sold as “self-driving” today is really advanced assisted driving, and a person with only a learner’s permit would not be legally allowed to operate such a car by themselves. Another limitation is that Driver’s EDward is built on the expectation that, in the future, all cars will have a digital display and be compatible with our software.

Moving forward, Driver’s EDward should be able to replace driving instructors completely. This process will feel strange at first but will overall improve safety. Driver’s EDward will allow for a standardized method of teaching driving and can be updated frequently as the technology in vehicles changes.
