ENP 162 Portfolio Pieces

Final Project: GiggleBots & Microbits

Mental Model API: 

Our pod first agreed to keep separate code bases for the controller and receiver microbit, since the logic in the implementation differs between the two. We then decided on a control system in which button A and button B separately accomplish the two tasks of passing control from room to room and activating a master mode in which one controller can drive all the Gigglebots at the same time. Our general approach was to have the controller in each room change the channel on its microbit to match the channel of the Gigglebot entering that room. Since we limited each Gigglebot to channels 4-7, a controller can easily cycle through the channels to find the right one by pressing button A, which increments the channel by 1 until it reaches 7 and then resets to 4. This is an efficient and reliable solution because it only requires a button press and there are no external factors that could interfere with it.

In addition, button B serves as an "on/off" switch for master mode. We used a boolean value in our architecture to keep track of whether master mode is activated. If it is not, pressing button B activates it by changing the controller's channel to #8 and every Gigglebot's channel to #8 as well. If master mode is activated, pressing button B turns it off and resets every Gigglebot's channel from #8 back to its previous one, resuming all previous activity and control. Separating master mode from passing control in our architecture fixed the issues we had previously been facing.

Task Analysis: 

Instructional Task Analysis:

System constraints/assumptions:

We assumed that the user has already:

  1. Uploaded code to the Gigglebot and controller.
  2. Placed microbits in the controller and Gigglebot.
  3. Placed their Gigglebot in the area where they want to drive it.

We also assumed there will be one radio channel for each controller/Gigglebot pairing.

Task allocation/Automation Strategy:

We decided that the power-on task should be left to the user to ensure intentional use. Turning the controller on could perhaps power up the Gigglebot, but we are working with analog switches and battery connections, which warrant a mechanical control.

Selecting radio channels is a task that could be automated: the system could check which radio channels are currently in use and then select an unused one.
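We did not build this automation, but the idea can be sketched in plain Python. The `pick_unused_channel` function and the 4-7 channel pool here are illustrative assumptions, not part of our actual code:

```python
# Hypothetical sketch of automated channel selection (not in our final
# build): given the set of channels already in use, pick the first free
# one from the 4-7 range our pod reserved for Gigglebots.

CHANNEL_POOL = range(4, 8)  # channels 4-7, per our pod's convention

def pick_unused_channel(channels_in_use):
    """Return the first free channel, or None if the pool is exhausted."""
    for channel in CHANNEL_POOL:
        if channel not in channels_in_use:
            return channel
    return None
```

On real hardware, the "channels in use" set would have to come from listening on each channel for traffic, which is the part that makes this harder than the sketch suggests.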

We decided to automate the handoff process when activating master control mode. When B is pressed, the user's controller sends commands to any Gigglebots running our code and makes them listen only to the master controller. When the controller gives up master mode, all Gigglebots automatically return to their previous radio channels and can again be controlled by their paired controllers.

Driving could be a combination of manual and automatic control. Gestures or button controls could be used to send a command, and the Gigglebot could then complete the command with the context of its sensory environment (if we had sensors in use).

As indicated, we were only able to automate one process due to time and development constraints. The chart below shows the levels of automation for each step of activating and using master control. Turning off master control would be the inverse: upon pressing B, the controller tells all Gigglebots that master control has been relinquished, and each Gigglebot then returns to its prior radio channel.

Software

We created the software using the MakeCode online editor to write the code bases for the controller and receiver microbits. The following is an analysis of the important blocks that enable a controller and receiver to correctly communicate with the Gigglebot in their room, or to become the master of all of them.

Controller & Receiver Code

Initializing Variables

Controller/Receiver Code: The block above is one of the more trivial pieces of our code, but it is still important: it initializes all the Gigglebots to channel 4 and creates a separate state for master mode, tracked with a boolean. After all the Gigglebots were set to 4, we could manually change the radio channel by pressing button A so we were all on unique channels.

Controller/Receiver Code: The block above enables the passing of control from room to room. Since our design has the controller change its channel to match the channel of the Gigglebot entering the room, this code lets the controller cycle through channels 4-7. The conditional says that if a controller is already on radio channel 7, it resets to 4 so the cycle can wrap back to the lower channels. Otherwise, the radio channel is incremented by 1, allowing for a smooth and efficient cycle.
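Since our actual implementation is MakeCode blocks, the button A cycle can be summarized as a small plain-Python sketch (the function name `next_channel` is ours, for illustration):

```python
# Sketch of the Button A channel cycle described above:
# channels wrap 4 -> 5 -> 6 -> 7 -> 4.

def next_channel(current):
    """Advance the controller's radio channel, wrapping from 7 back to 4."""
    if current >= 7:
        return 4
    return current + 1
```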

We had a few difficulties getting to this final implementation. At first, we aimed for an automatic solution where the Gigglebot could change its channel based on the signal strength of the microbit. That way, whenever a Gigglebot entered a room, it could automatically change its channel so the controller in that room could drive it. After realizing this was not feasible, since signal strength may be unreliable due to interference, we decided to go with the more manual approach where the controller in the room changes its channel to match that of the Gigglebot entering the room. The remaining difficulties were smaller ones that required debugging to make sure the channels incremented correctly and that we were cycling through only four channels.

Controller Code: The block above modularizes master mode. As shown in the code, if button B is pressed while the controller is in the master state, it sends out the number 0 to all the Gigglebots. When a receiver microbit receives the number, it checks whether it is currently in the master state. Since the controller is in the master state, the receiver will be as well, so it resets to its original channel and turns off master mode by setting the variable to false. The controller also sets its master mode variable to false to ensure the controller and receiver stay in the same state.

The for loop above handles the case where button B is pressed and master mode should be activated because it was previously off. It iterates through all the channels (4-7) and sends out the number 0, which lets the controller communicate with each Gigglebot on its separate channel. After this, the master mode variable is set to true in both the controller and receiver code to properly update the state, and the controller, along with all the Gigglebots, is set to channel 8.

Essentially, this chunk of code keeps track of and updates the master mode state (on/off), and enables one controller to change every Gigglebot's channel to the "master" channel, number 8, giving that controller the ability to drive all the Gigglebots.
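The controller-side logic above can be modeled in plain Python. This is a sketch of the same state machine, not our MakeCode blocks: the `Controller` class, `set_group`, and the `sent` log are stand-ins we invented so the behavior can be followed and checked off-hardware.

```python
# Plain-Python model of the controller-side master-mode logic.
# `set_group` stands in for the MakeCode "radio set group" block;
# `send` records (channel, value) pairs instead of transmitting.

MASTER_CHANNEL = 8
GIGGLEBOT_CHANNELS = range(4, 8)  # channels 4-7

class Controller:
    def __init__(self, home_channel):
        self.channel = home_channel       # channel paired with this room's bot
        self.home_channel = home_channel
        self.master_mode = False
        self.sent = []                    # log of (channel, value) sends

    def set_group(self, channel):
        self.channel = channel

    def send(self, value):
        self.sent.append((self.channel, value))

    def press_b(self):
        if self.master_mode:
            # Relinquish master mode: tell every bot on channel 8, go home.
            self.send(0)
            self.set_group(self.home_channel)
            self.master_mode = False
        else:
            # Claim master mode: send 0 on each channel, then move to 8.
            for channel in GIGGLEBOT_CHANNELS:
                self.set_group(channel)
                self.send(0)
            self.set_group(MASTER_CHANNEL)
            self.master_mode = True
```

Pressing B once from a home channel sends 0 on channels 4 through 7 and leaves the controller on channel 8; pressing B again sends 0 on channel 8 and returns the controller home, mirroring the on/off behavior described above.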

Receiver Code: This block of code is implemented on the receiver. As explained in the controller section, when button B is pressed on a controller, the number 0 is sent out to all the Gigglebots. The logic here is consistent with the controller's: if master mode is off, pressing B activates it and changes every Gigglebot's channel to #8; if master mode is on, pressing B turns it off and resets each Gigglebot's channel to its previous one.
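The receiver side can be sketched the same way. Again, the `Receiver` class and `on_received_number` name are our illustrative stand-ins for the MakeCode "on radio received" block, under the assumption that 0 is reserved as the master-mode toggle:

```python
# Plain-Python model of the receiver-side logic: when the number 0
# arrives, toggle master mode, remembering the home channel so it can
# be restored when master mode ends.

MASTER_CHANNEL = 8

class Receiver:
    def __init__(self, home_channel):
        self.channel = home_channel
        self.home_channel = home_channel
        self.master_mode = False

    def on_received_number(self, value):
        if value != 0:
            return  # 0 is the master-mode toggle; other values drive motors
        if self.master_mode:
            self.channel = self.home_channel  # leave master mode, go home
            self.master_mode = False
        else:
            self.channel = MASTER_CHANNEL     # join the master channel
            self.master_mode = True
```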

Overall, the general structure of the master mode implementation is the same for both the controller and receiver so that they stay in the same state and automatically change each Gigglebot's channel to match the channel of the controller that pressed button B. Leveraging for loops and boolean variables enabled us to create a fast, automatic solution to this task.

However, master mode was the part we had the most difficulty implementing. There was a lot of discussion about how to approach it, and we struggled just to come up with an implementation. At first, we tried triggering master mode with button A, where reaching channel 8 would change the channels of the other Gigglebots. However, master mode would always be triggered when a controller on channel 7 was trying to get to channel 4, because button A can only increment, not decrement. We eventually realized that we could use button B to separate master mode, which gave us a lot more direction and a better architecture.

Controller Code: In addition to the implementation of the control system, we all agreed that tilting the controller should determine how the Gigglebot changes direction. This block of code associates a rightward tilt on the controller with a right turn, which we figured would be the most intuitive maneuver. Initially, our group faced a lot of problems with the sensitivity of the turns, but we addressed this by manipulating the acceleration of the motors. When turning right, we wanted the right motor to be the slowest in comparison to the left motor in order to create a more gradual turn.
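One way to see the turn-softening idea is as a speed mapping. The sketch below is illustrative only: the `motor_speeds` function, the 15-degree full-turn threshold, and the speed constants are assumptions, not our exact tuned values.

```python
# Rough sketch of the gradual-turn idea: when tilting right, slow the
# right motor relative to the left so the turn stays smooth. Numbers
# here are illustrative, not our tuned constants.

FORWARD_SPEED = 50

def motor_speeds(tilt_right_degrees):
    """Return (left_speed, right_speed) for a rightward tilt in degrees."""
    # Scale the right motor down as tilt increases, bottoming out at 20%.
    slowdown = min(tilt_right_degrees / 15.0, 1.0)  # 15 degrees = full turn
    right = FORWARD_SPEED * (1.0 - 0.8 * slowdown)
    return FORWARD_SPEED, right
```

With no tilt both motors run at the same speed; as the tilt approaches 15 degrees the right motor slows toward a floor, producing the gradual right turn described above.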

UI/UX:

We aimed to develop a controller that is minimalist, sleek, and effective. The shape of our controller affords holding. Since it is designed like other gaming controllers, novice users will quickly and intuitively understand how to hold it. There are no instructions because it does not warrant them; the design communicates well enough on its own. There is a battery compartment located on the backside of the controller, which is convenient but also helps give the controller some weight in the user's hands.

We wanted to set our controller apart somehow from competitor controllers. We knew we wanted to 3D print ours, but needed some way to differentiate ourselves. The answer was PolySmooth filament. The material is polyvinyl butyral, a clear polymer that provides a shiny, smooth finish. After printing the controller, we placed it in a polishing machine. A nebulizer fills the chamber with a mist of isopropyl alcohol, which coats the surface of the print. The alcohol slightly melts the surface, smoothing it and leaving a glossy finish. But so what? Why does this matter?

Don Norman, one of the fathers of human-centered design, presents many interesting findings on emotion and design in his book Emotional Design. One of them is that attractive products tend to work better than unattractive ones. He writes, "Attractive things make people feel good, which in turn makes them think more creatively. How does that make something easier to use? Simple, by making it easier for people to find solutions to the problems they encounter" (p. 19). Now back to our controller. By making our controller attractive (i.e., smooth in the user's hand, shiny blue, etc.), a novice user is in a better frame of mind if and when there are complications while trying to control the Gigglebot. The user will think creatively, not narrowly, if an issue arises. That was the whole concept behind our design, and why we went to such lengths to achieve its shiny, smooth appearance!

Naive User Walkthrough

  1. Turning Gigglebot on/off:

Place the power switch in the “on” or “off” position.

  2. Turning controller on/off:

Plug the battery cable into the port on the controller. Unplug to power down.

  3. Pairing controller to Gigglebot:

Look at the LED screen on the Gigglebot and ensure that the displayed number matches the number on the controller LED. If it does not, press the “A” button until both numbers are the same.

  4. Driving 1 Gigglebot with controller:

Keep the controller parallel to the floor and tilt it in the direction you want your Gigglebot to move. Be careful not to tilt the controller more than 15 degrees; the Gigglebot is very sensitive!

  5. Driving multiple Gigglebots at once:

Your controller can also drive multiple Gigglebots at once. To enable this “Swarm” mode, simply press the “B” button on the controller. All LED displays will show an “M”. To disable this mode, press “B” again.

Overview and Video: 

A general overview of our system: a microbit inserted in a “gaming-like” controller (the “Controller Microbit”) is programmed to send commands to the microbit on the Gigglebot (the “Receiver Microbit”), enabling any user with the controller to drive it. If the controller and receiver microbits are on the same channel, the receiver microbit parses commands from the controller microbit, affecting the movement or current state of the Gigglebot. Our system interacts seamlessly within the pod because our pod uses the same source code for both the controller and receiver microbits, giving any controller microbit the ability to communicate with any receiver microbit on the same channel. This avoids errors in our deployment, since the implementation is identical.

The two main aspects of the Human-Machine System Design for this project were coding the controller and receiver microbits, and making a physical controller for the controller microbit. We created two separate code bases for the controller and receiver microbit, and implemented our solution in the MakeCode editor using blocks. For more specific details on our implementation, check out the Software section above. On the UI/UX side, we 3D printed a controller in PolySmooth filament and polished it. In the following video, we explain the goal of this project, show our pod working together to come up with a solution, and finally showcase our pod's Gigglebots in action. Enjoy!

Reflections and future directions:

As a team, and as a pod, we had strong communication throughout the project. It was key to get everyone in the same room early on to discuss the control logic. We came up with a fairly straightforward way to hand off control and to enter swarm mode. We believe our controller design set us apart from the other teams; quite a bit of work went into achieving the final product. In terms of future directions, we would have liked to fine-tune the controls (the controller was very sensitive). With more time, we could have played with that aspect a bit more. Overall, though, we are happy with how the project came together and with the communication within our pod!

References

Norman, D. A. (2004). Emotional design: Why we love (or hate) everyday things. Basic Civitas Books. 

Portfolio Automation Project: 

On my personal website, I decided to use a service called IFTTT.com to produce a text-document version of each blog post, stored in Tufts Box, whenever I publish a new one. This is very useful because it acts as a safeguard in case the server for sites.tufts.edu is down and I need to access my blogs. Additionally, if I want to submit a writing sample for a job, I can easily access my work in a suitable document. Lastly, Tufts Box neatly organizes my blogs in a folder on the cloud, so I don't have to store them locally on my computer, thus avoiding clutter.

As you can see from the pictures above, the “Applet”, a program that connects two or more platforms, ran successfully, and a text file of my Automation blog is now stored in my Tufts Box.

Since sites.tufts.edu is hosted by WordPress, creating an Applet with WordPress is very easy. However, I initially struggled with it because the link to the blog portion of my website was incorrect, which caused some confusion. If you double-check that the link is correct and grant IFTTT.com access to the platform you are connecting to, there should be no issues.

IFTTT.com offers a wide array of Applets that enable users to send information and perform actions across platforms, so I highly recommend it.