ENP 162 Wrap Up


So there I was last week, getting my kids ready for school one morning and thinking about how best to summarize my experience in ENP 162 in a blog post. My phone buzzed with what I assumed was an incoming text message, but when I looked, it was actually a notification from my Maps app.

The message said, “7 min to High St, Take Boston Ave, Traffic is light.” This was very peculiar to me, partly because it was the first time Maps had ever sent me a message unprovoked. But the oddest part is that I was literally about to drive my kids to their school on High Street, taking my usual route along Boston Ave.

As a Human Factors Engineer, I was obviously excited by this strange message! Questions immediately swirled in my head.

Why did I receive the message? How did Maps know where I was going to drive before I even left? Was it a coincidence that I was about to drive to High Street when I received directions and suggestions from Maps to get to High Street, or just perfect timing?

Luckily, ENP 162 armed me with plenty of tools and knowledge to help answer these questions. This post will explore many of the Human-Machine System Design topics we learned this semester to explain the origins of my Apple Maps message.

GIS and GPS

Apple Maps is a Geographic Information System (GIS) that exploits the features and capabilities of the Global Positioning System (GPS). GPS is a constellation of satellites orbiting the Earth whose signals let a receiver in your phone, computer, or car determine its exact position, time, and velocity. A GIS such as Apple Maps simply uses that GPS data to calculate directions and display positions and routes on a map or satellite imagery. So, Apple Maps uses GPS to find and display my position … but this is only part of the picture.
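To make the GIS half of that concrete, here is a minimal sketch of what any routing app does with raw GPS fixes: turn two positions into a distance and a rough travel-time estimate. The coordinates, assumed driving speed, and helper names are purely illustrative, not Apple's implementation.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eta_minutes(distance_km, avg_speed_kmh=40.0):
    """Crude ETA: distance divided by an assumed average city driving speed."""
    return 60.0 * distance_km / avg_speed_kmh

# Hypothetical fixes: my driveway and the school on High Street.
home = (42.4085, -71.1183)
school = (42.4021, -71.1060)
d = haversine_km(*home, *school)
print(f"{d:.1f} km, about {eta_minutes(d):.0f} min")
```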

Big Data & Machine Learning

Working behind the scenes at Apple, often invisible to everyday users, are processes known as Big Data and Machine Learning. These processes analyze extremely large data sets in order to reveal patterns, trends, and associations. It is very likely that Apple Maps has been saving and storing my driving routes around Boston. A machine learning process analyzes this data and, in short order, can “learn” that I drive to High Street every morning at 7:30 am to bring my kids to school.
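As a toy illustration of the idea (emphatically not Apple's actual pipeline), a learner only needs a log of past trips to notice a weekday-morning habit. The trip log, field layout, and support threshold below are invented for the example.

```python
from collections import Counter

# Hypothetical trip log: (weekday, departure_hour, destination)
trips = [
    ("Mon", 7, "High St"), ("Tue", 7, "High St"), ("Wed", 7, "High St"),
    ("Thu", 7, "High St"), ("Fri", 7, "High St"), ("Sat", 10, "Grocery Store"),
]

def predict_destination(hour, history, min_support=3):
    """Guess where a trip leaving around this hour usually goes, if the habit is strong enough."""
    matches = Counter(dest for _, h, dest in history if h == hour)
    if not matches:
        return None
    dest, count = matches.most_common(1)[0]
    return dest if count >= min_support else None

print(predict_destination(7, trips))  # -> High St
```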

Signal Detection Theory

Machine learning and Big Data can be viewed as advanced applications of Signal Detection Theory. Detection theory gives us a way to measure the ability to differentiate between information-bearing patterns (signal) and random patterns (noise). Once again, Apple Maps uses very sophisticated algorithms to predict my future driving activity by separating the random noise from the genuine patterns in my driving history.
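The classic way detection theory quantifies that separation is the sensitivity index d′, computed from hit and false-alarm rates. The counts below are invented just to show the arithmetic; they are not real Apple Maps statistics.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical: how well a predictor separates "school-run mornings" from all other mornings.
print(f"d' = {d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42):.2f}")
```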

Automation and Alerts

Now that Apple Maps has “learned” my driving history and where and when I go, the application’s automation was used to deliver a sleek notification alert to my phone. Of all the places I go, my daily trip to my kids’ school each morning at 7:30 am is probably the most clearly defined pattern, and Maps clearly identified it. So, I received my alert and, in a sense, should be better off now.
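Putting the pieces together, the final automation step could be as simple as a timed rule: when the learned departure window arrives, push the alert. This is only a sketch of the idea; the habit table, message format, and weekday check are assumptions, not how Apple actually schedules notifications.

```python
import datetime

# Hypothetical learned habit: weekday departure hour plus a precomputed route estimate.
HABIT = {"hour": 7, "destination": "High St", "route": "Boston Ave", "eta_min": 7}

def maybe_send_alert(now):
    """Return a proactive notification string when the learned departure window arrives."""
    if now.weekday() < 5 and now.hour == HABIT["hour"]:
        return f'{HABIT["eta_min"]} min to {HABIT["destination"]}, Take {HABIT["route"]}, Traffic is light'
    return None

print(maybe_send_alert(datetime.datetime(2019, 12, 9, 7, 25)))
```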

Internet of Things

This entire process of using signals and sensors to analyze my driving data, and automation to deliver an alert to my phone, is a rather beautiful example of the Internet of Things (IoT). IoT is a system of interrelated computing devices with the ability to transfer and exchange data over a network without human-to-human or human-to-machine interaction. While it may seem mysterious to people who do not understand the foundations of Human-Machine System Design, IoT is happening constantly behind the scenes, every day and everywhere. We will rely on IoT capability more and more in the future as our technology expands.


I hope this post helps shed some light on the many factors involved in complex Human-Machine System Design. In the case of my “mysterious” Apple Maps message, we can peel back the onion and see these concepts actively employed by tech and communication companies today.

I thoroughly enjoyed ENP 162 and I would recommend the course to anyone who wishes to expand their horizons when it comes to understanding Human-Machine design topics. Thanks for reading.

-MPF

HAL 9000: Meanest ChatBot Ever!

How can I help you?

Well, HAL hasn’t exactly existed yet, aside from movie magic and our imagination. In case you are not a big sci-fi fan and have no idea what I am describing, HAL 9000 (affectionately just HAL) is the all-powerful supercomputer from Stanley Kubrick’s “2001: A Space Odyssey,” which is based on Arthur C. Clarke’s short story “The Sentinel.” In 2001, one of my favorite space stories, HAL is the main computer controlling all ship systems for a group of astronauts on a secret mission exploring the potential for extraterrestrial life. HAL, who is so smart it is believed he “can’t make a mistake,” becomes increasingly paranoid about the astronauts and their intentions when they question their task, so he kills them off one by one in the vast, lonely expanse of space to protect the mission he is programmed to carry out.

While HAL certainly isn’t a ChatBot, at least in our current sense of the word, there are some striking similarities to our present-day conversational computer friends. Both are, at root, simply computers. Both are automated. Both interact with humans. Both either make decisions for us or strongly assist our decision making. Both further the impending doom of humanity. Just kidding about the last one, sort of!

The debate this begs is: how comfortable are we with allowing computers to carry out our needs? When it comes to playing music or searching for your insurance company’s phone number, most people are probably very comfortable using ChatBots or similar conversational interface systems like Alexa or Google Home, albeit some more than others, undoubtedly. Simple, non-threatening tasks are easy for them to complete, right? We are currently pretty comfortable letting them handle the “low risk” region. But what about higher-risk tasks, or actual decision making for us humans? “Alexa: plan my wedding” … no way, right?!

Can’t say they didn’t warn us!

One thing’s for sure: the technology is only moving in one direction. Companies and industry are increasingly relying on AI to increase operational effectiveness and reduce demand on valuable human resources. And why not? By some estimates, half of the tasks a telephone customer-service representative handles could already be completed by a ChatBot at the current level of technology.

Our future, short and long term, will be shaped by technology more than ever before. It is only a matter of time before computers, ChatBots, and Alexa are making complicated decisions for us based on their programming and machine-learned ethics. Personally, I find that looking forward is always the hardest part when dealing with computer AI and the potential reach computers can have. Looking back, in hindsight, trusting them always feels pretty safe. Or is that because that’s how they are already programming us to feel?!

-MPF

StarLink: Get Ready for lots of IoT (and lots of space junk)!

Starlink satellites orbiting Earth

In case you didn’t know, SpaceX is working extra hard to give every living/breathing and every non-living/non-breathing thing on Earth instant internet access with a constellation of ~50,000 satellites (no typo). The constellation, called StarLink, is a service SpaceX hopes to offer to lucky North Americans as early as 2020. With roughly 120 StarLink satellites already in orbit as of November 2019, SpaceX is beaming internet across the U.S. as you read this (Elon Musk already texts and Tweets via StarLink, fyi).

Internet access for people is certainly a main priority for SpaceX in this endeavor, especially because high-performance internet without geographic or physical borders will earn them massive global profit. And rightly so … 50,000 satellites in orbit is an impressive feat, after all. But an equally significant priority is to build up the support architecture for the Internet of Things (IoT). IoT is a system of interrelated computing devices, machines, animals, and humans with the ability to transfer data and communicate via a network without human-to-human or human-to-computer interaction. Think of a regular street corner in NYC … the IoT concept envisions hundreds of devices in cars, on people, on buildings, and on infrastructure, all communicating on networks simultaneously.

The issue with IoT is that networks rely on cellular, wifi, or Bluetooth service in order to communicate, yet 90% of the surface of the Earth, such as open ocean or farmland, is not serviced by these communication mediums. Shippers tracking ships at sea, farmers monitoring their crops, and governments monitoring civil infrastructure all require cooperation with costly traditional satellite communication providers such as Iridium ($$). Enter the solution … lots of tiny, affordable, expendable satellites. Lots of them. Many of these satellites will range from the size of a baseball to a shoe. Pretty small!

These tiny satellites will fill the sky and provide the communication architecture for the network of IoT devices. Most of them won’t have the bandwidth for gaming or web browsing. They are designed instead for the tiny bursts of data that agriculture, infrastructure, and asset-tracking IoT devices produce.
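To get a feel for how tiny those bursts are, here is a hedged sketch (the sensor, field names, and payload format are invented for illustration): a complete asset-tracking report can be packed into a couple dozen bytes.

```python
import json
import struct

# Hypothetical crop-moisture sensor report: id, latitude, longitude, moisture %.
reading = {"id": 4217, "lat": 41.203, "lon": -98.467, "moisture": 23.5}

# Human-readable form vs. a packed binary burst a satellite backhaul might carry.
as_json = json.dumps(reading).encode()
as_packed = struct.pack("<Hfff", reading["id"], reading["lat"], reading["lon"], reading["moisture"])

print(len(as_json), "bytes as JSON")    # roughly 60 bytes
print(len(as_packed), "bytes packed")   # 14 bytes
```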

While internet-for-all is certainly an exciting prospect and a game changer for the grand IoT landscape (just imagine no communication bounds for you or your devices), there will certainly be unforeseen drawbacks.

Apparently astronomers are already upset with SpaceX, after a rather small string of 60 of their satellites polluted the night sky with ambient light, disrupting deep-space telescopes for about an hour this month. The time-lapse video is pretty cool, too.

If a “small” train of 60 satellites is enough to annoy our tech-loving astronomers, then where will we stand with 50,000+ satellites overhead? What will be the impacts to our skies, and our radio spectrum, with clutter exponentially greater than we can currently comprehend? The gains of unlimited internet and communication are surely real, but so are the risks.

-MPF

GPS: Do We Realize How Lucky We Are?


GPS may not be the world’s first satellite-based global radio-navigation system (Transit was), but it is by far the most successful and widely used system of its kind today. While you may consider the steep development costs (~$20B) and annual maintenance costs (~$1B) to be overly pricey, all feelings of buyer’s remorse (taxes … the horror, the horror) should instantly evaporate with an awareness of the unforeseen success and realized global economic benefit of GPS (>$300B per year in benefit for >2B users). That’s quite a return on investment, isn’t it? You’re welcome!

GPS, the Global Positioning System, was developed by the U.S. Department of Defense in the early 1970s (declared “operational” in 1995) to facilitate global campaign and military theater operations. The byproduct we know as civilian GPS was a completely unforeseen and unexpected derivative of the military program. Today, billions of users around the world use GPS daily. It has changed our world and the way we live in it.

Meter-level accuracy … from 20,000 kilometers away?

The capability and accomplishment of the GPS program impress me every time I think about it. In a nutshell, GPS gives every civilian user the ability to precisely determine their position, to within a few meters, from anywhere on Earth while using a receiver chip as cheap as $1. And this service is completely free. FREE!

This Goliath was the first GPS receiver, so to speak. The device now lives on a microchip inside your smartphone, computer, car, etc.

We owe this accomplishment to many, many people who advanced the science behind the technology (Einstein, to name at least one, gave us special and general relativity, which the system’s timing depends on). Their collective achievements placed 30+ satellites 20,000 kilometers above Earth’s surface, traveling at 4 kilometers per second in orbit, capable of maintaining synchronized time within about 10 nanoseconds of each other.

Time, you see, is the heart of GPS. Accurate time (read: very, very accurate). GPS uses the method of trilateration: it measures the time it takes for a radio signal to travel from a satellite to your $1 receiver on Earth. By measuring these times from multiple satellites (at least four, so the receiver can also solve for its own clock error), your receiver converts the times to distances and intersects them to determine your physical position. But since radio waves travel so fast (at the speed of light), the satellites need very accurate atomic clocks to ensure the time stamps on the signals are incredibly precise. After all, you want it to be accurate, right? And this all occurs aboard satellites in outer space, 20,000 kilometers above, zooming over the Earth at 4 kilometers per second. Mind-boggling.
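Here is a toy, two-dimensional sketch of that time-to-distance-to-position step (real GPS solves in three dimensions plus a receiver clock-bias term). The satellite positions and travel times are made-up numbers, chosen to be roughly consistent with a receiver near the origin of this toy coordinate frame.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions (meters) and measured signal travel times (seconds).
sats = np.array([[0.0, 20_200_000.0],
                 [15_000_000.0, 18_000_000.0],
                 [-12_000_000.0, 19_000_000.0]])
travel_times = np.array([0.06738, 0.07816, 0.07496])

ranges = C * travel_times  # each precise time stamp becomes a distance

def residuals(pos):
    """Difference between geometric distances to each satellite and the measured ranges."""
    return np.linalg.norm(sats - pos, axis=1) - ranges

solution = least_squares(residuals, x0=np.zeros(2))
print("Estimated receiver position (m):", solution.x)
```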

The Lord Giveth, and The Lord Taketh Away

Forget not that the U.S. military owns and operates all aspects of GPS. Through the 1990s the U.S. employed Selective Availability on the constellation, intentionally degrading the public signal accuracy for purposes of national security (out of fear, mostly). The practice was discontinued in 2000 by presidential order (Bill Clinton), but make no mistake: the U.S. military retains the ability to degrade or shut off GPS if national security requires it. They own it, but we (every person on Earth) reap the benefits.

No Free Lunches, Well Except for GPS

I keep mentioning that GPS is free, but this should truly impress you, especially in present-day society. Regardless of what country you are standing in or come from, you can turn on your $1 receiver anywhere on Earth and instantly know your position. Nearly every shred of new technology going forward needs to be able to operate in tandem with GPS, as our desire to know and apply our position instantly is no longer a wish, but a requirement of the day.

-MPF

When it comes to Emotive Display perfection …


Look no further than the King (and Queen) … Disney!

My children (currently 8, 6, and 4 years old) have been faithful Disney [and Pixar] followers their entire, albeit short, lives. Part of this devotion is probably owed to my wife’s equally lifelong loyalty to Disney.

I am a bit of a different breed. I didn’t grow up on the Disney “brand” and only became immersed in the Disney lifestyle once my children were born. When we lived in Florida we frequented Disney World with our kids, and I had the chance to see firsthand how Disney has mastered the ability to connect not only with youth, but with people of all ages.

Part of this triumph is undoubtedly owed to Disney’s mastery of music and storytelling. A common trait among their theatrical hits is great storytelling and songs that easily get stuck in your head. But while these qualities may be the bones of each animated story, the glue that holds it all together is Disney’s ultra-fine attention to detail in Emotive Displays. This focus ranges from robots (like Wall-E and Eve above) to animals to inanimate objects.

How could you not love him?!

Disney does a miraculous job of making the emotional decisions for their customers, giving them little room, if any at all, to make their own interpretations. And this is surely supported by their ability to drive their brand with precise Emotive Displays for each character and scene.

I asked my 6-year-old son what he likes about Wall-E (above). Of course he answered that he’s “so cute” (just look at those puppy dog eyes!), but he also described how he personally feels the way Wall-E feels sometimes, describing Wall-E as if he has all the parts of a human’s soul, with emotions and feelings, yet none of the physical human parts. After all, Wall-E is just a robot. It amazes me how Disney can build a character with completely perceivable and relatable human emotions yet devoid of physical human likeness. And, as if that were not complicated enough, they set the story in outer space, where it is quite challenging to master the animated “physics” of that environment. Quite impressive!

Disney mastered the physics of underwater environment simulation paired with emotionally convincing Emotive Displays. Finding Nemo is the best-selling DVD of all time.

And occasionally we see how fine the line is when companies try to strike the right balance with their Emotive Displays. Paramount and Sega took major fire earlier this year when they released a trailer for their upcoming Sonic the Hedgehog movie. Their fanbase and preview crowd gave them a 200 mph slap in the face for designing a “creepy” looking Sonic with strange Emotive Displays (fail!). Sonic’s human-like teeth particularly turned people away. Animation redesign efforts to fix the flop cost the companies millions of dollars and delayed production.

Disney certainly deserves much credit for successfully winding up on the right end of this balancing act time and time again while avoiding the “uncanny valley” (see above). Artificial Affective and Emotive Displays will become more common in our everyday lives, particularly with the future implementation of Social Robotics. It will be prudent to follow the Disney model for success in these endeavors.

-MPF

Machines Can Now Dream (thank you Google)!


Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. The function and application of neural networks are at the heart of machine learning.

Advances in machine learning have shaped today’s consumer industry and changed the way we shop and interact within it. Sectors from social media (think Facebook and Twitter) to finance (think AmEx monitoring your charge card for fraudulent activity) have adopted machine learning to make choices and provide services for the consumer.

And now Google, as they often do, has taken machine learning one step further. They let the network dream, develop its own “thoughts,” and take its own direction.

Like other neural network developers, Google trained a network by showing it many examples (pictures) of what they wanted it to learn, in the hope that the network would retain the significant features that define an object. For example, a tire needs to be round, but it can be any color or size and have many different textures.

Zoom in to see all the animals the network “sees” in the clouds

Google trained one particular network to recognize animals, among many other images. They then created a feedback loop, telling the machine, “Whatever you see there, I want more of it!” Because higher-level layers of the network identify increasingly complex features, when the network was shown an image of a big sky speckled with cloud formations, it “saw” images of animals in the clouds, a fish for example, much like a person would. Using the feedback loop, the network then made the cloud look a little more like the fish it “saw” on each iteration, until a highly detailed image of a fish appeared.
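For the curious, here is a hedged, minimal sketch of that “amplify whatever you see” loop in PyTorch. It follows the same inceptionism idea Google describes, but the model, layer choice, step size, and iteration count are illustrative assumptions, not Google’s actual code.

```python
import torch
import torchvision.models as models

# A pretrained ImageNet classifier stands in for Google's trained network.
model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image gets updated

# Capture activations from one intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(lambda m, i, o: activations.update(target=o))

def dream(image, steps=20, lr=0.01):
    """Gradient ascent on the input image to amplify whatever the chosen layer 'sees'."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)
        loss = activations["target"].norm()  # "whatever you see there, I want more of it"
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# A random tensor stands in for a normalized 224x224 photo of clouds.
clouds = torch.rand(1, 3, 224, 224)
dreamed = dream(clouds)
```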

Such a creative network it is!

The network can dream, and the results are fascinating. Even a simple network can be trained to gaze at the clouds like a person and “see” images that we recognize.

-MPF

Source: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

Can the Humans of the Future escape the Orwellian nightmare?


The Chinese government’s Social Credit System (SCS), first announced in 2014, is supposed to be fully operational by 2020. The pilot program has been in effect for years and has already affected millions of Chinese citizens. And yes, the program will be mandatory. If you haven’t heard of SCS, then read this, this, and this.

The SCS is a living, digitally implemented, secretive, government-run program meant to rate the “credit” or “reputation” of all citizens and businesses. It vows to “make trustworthy people benefit everywhere and untrustworthy people restricted everywhere.” The Chinese government will (and already does) use this “credit” score to make a plethora of personal, financial, and social judgments about its citizens and to sanction them when it chooses. With Big Data analysis technology at the root of the program, it is an undeniable form of mass surveillance with exceptional power to humiliate, embarrass, and in many cases destroy the lives of citizens while benefiting the lives of those the regime chooses. The system is currently being managed by regional and local governments or by private firms holding massive amounts of personal data.

Those citizens who don’t pay bills on time or who default on loans are dead in the water. Which shouldn’t be a big surprise. But what should surprise you is that the government will be watching all sorts of other behavior and misdeeds to rate its citizens too. Those who are observed loitering, smoking in designated non-smoking areas, playing too many video games (yes, I’m serious), spending frivolously, walking a dog without a leash, or posting “fake news” on social media will have their credit scores negatively affected.

Check out this Twitter video (2018) of a government message played to passengers on a Beijing high-speed train:

Negative social credit scores can have all kinds of impacts on daily Chinese life, such as losing the privilege to purchase airline tickets or hotel rooms, automatic denial of credit card applications, lower internet speeds (yes, serious, again), and automatic denial of applications to higher education, like college, for you or your children.

Jinan, a city in eastern China, has been enforcing a social credit system for dog owners since 2017. It is compulsory for owners to register, and they are given a license with 12 points. Actions such as walking a dog with no leash or excessive barking get points deducted. If your points are exhausted, your dog is taken away, and courses and tests are required to reapply for a new license.

Given how bold the launch of this program has been, it isn’t far-fetched to envision the Chinese government exploring facial recognition or eye-scanning tech to patrol the landscape. Even simple misdemeanors like jaywalking could “automatically” reduce your credit score. Or imagine Big Data watching your spending habits, increasing your score when you buy things the regime likes, such as diapers, and reducing it when you purchase video games or alcohol … the things the regime doesn’t like.

It is undeniable that Americans are also being divided and categorized into billions of data sets by our own Big Data in the United States. While we can hope that our government doesn’t have sinister intent like China, only time will tell what direction we take.

China may be a lost cause at this point … but what do you think? Will the humans of the future feel a sense of natural freedom like we feel today, or will their reality be shaped by the constant, underlying judgment of the digital State?

-MPF

Signal Detection for Land Based Search and Rescue (SAR)


Signal Detection Theory is a theoretical framework dating back to the 1800s. However, it was not until the 1950s that the field was thrust into mainstream psychology research by Peterson and Birdsall. Their paper “The Theory of Signal Detectability,” published in 1954 as part of University of Michigan research, focused primarily on signal detection theory for electronic radar applications. The underlying objective involved manipulating the decision criterion to either increase or decrease true and false detections of a radar signal.

The “business” of Search and Rescue (SAR), whether land or aerial based, has benefited greatly from signal detection theory concepts over the past 50 years. Our present-day searchers, particularly those performing aerial searches from airplanes or helicopters, conduct searches in a systematic way in order to optimize Probability of Detection (POD). The notion of POD can be complex, but the basic fundamentals involve increasing the coverage of the search area and optimizing the “detectability” achieved by the search platforms (planes, helicopters, etc.). That detectability is characterized by the Effective Sweep Width of the search platform, a standard tabulated value derived from real-world experimentation. Most simply stated, the Effective Sweep Width (typically measured in yards or nautical miles) is the width at which the number of objects the platform detects beyond that distance equals the number it misses within it. This matters to search planners because environmental variables, aircraft altitude and speed, the sensors employed, and search object size all affect the Effective Sweep Width, so planners can assign search platforms optimal altitudes, track spacing, and speeds to increase Probability of Detection. It’s Science!
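As a hedged sketch of how a planner might turn those tabulated sweep widths into a Probability of Detection, the snippet below uses the classic coverage-factor relationship and the exponential (random-search) detection model; the numbers are illustrative, not values from any SAR manual.

```python
import math

def coverage(sweep_width_m, track_spacing_m):
    """Coverage factor C = effective sweep width / track spacing."""
    return sweep_width_m / track_spacing_m

def pod_random_search(c):
    """POD for the classic random-search model: POD = 1 - e^(-C)."""
    return 1.0 - math.exp(-c)

# Example: an object with a ~32 m effective sweep width searched on 40 m track spacing.
c = coverage(sweep_width_m=32.0, track_spacing_m=40.0)
print(f"Coverage: {c:.2f}, POD: {pod_random_search(c):.0%}")
```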

Let’s take a look at real-world experimentation using signal detection theory concepts to calculate search Effective Sweep Width (detectability). In 2002 the Potomac Management Group prepared a detailed report for the National Search and Rescue Committee (NSRC). In their research, they conducted multiple experiments using orange gloves and black trash bags as search objects in a densely wooded area in West Virginia. Their experiment generated numerous sequences documenting the distances at which search and rescue personnel were able to identify the search object or positively determine its absence.

In the chart below, “Orange Glove Half Sweep Width Estimator,” you can see how the experimenters determined the distance at which the probability of detection and the probability of non-detection are equal. In the case of the orange glove hidden in dense wooded forest, the half effective sweep width is 16.27 meters. In a similar land search, planners would use this data to space out searchers so as to optimize the POD and coverage for a similar object and search criteria. That is, it would not be optimal to place searchers directly next to each other, nor would it be effective to space them out beyond roughly 16 meters.

In the case of the black garbage bags (fitted in the shape of a human), the effective sweep width was much larger since the object was easier for the searchers to spot. That is, they could identify it from farther away, and “non-detections” also occurred farther away than in the orange glove sequences.

These concepts, and the tabulated data they generate, are of particular importance to search planners employing aerial search platforms. Aerial platforms are often limited resources, and effectively optimizing their performance and coverage is crucial to effective search operations. The table below is an excerpt from the U.S. Coast Guard Addendum to the National Search and Rescue Supplement. While many similar tables are included in the manual, this particular table provides search planners with sweep width values depending on variables such as search object size, meteorological visibility, and aircraft search altitude.

Source: A Method for Determining Effective Sweep Widths for Land Searches

Source: U.S. Coast Guard Addendum to the National Search and Rescue Supplement to the International Search and Rescue Manual

-MPF

Intelligent Transportation


What if we could make a safer world and eliminate traffic fatalities? Or increase traffic efficiency, thereby optimizing resources and making our world greener?

These goals are the vision of the DoT’s Intelligent Transportation System (ITS) initiative. The two priorities of ITS are to realize Connected Vehicle (CV) implementation and to advance automation. This post will explore the basics of CV technology, which is a critical stepping stone to the advanced transportation automation capabilities of the future.

So what is CV tech? Connected Vehicle tech aims to enable safe and interoperable networked wireless communications among vehicles, infrastructure, and people using Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) applications. These applications have numerous functions; however, the overall initiative is to alert the driver to unsafe conditions and prevent collisions with other vehicles and pedestrians.

NYC is one of three locations where the DoT is establishing an extensive pilot program to conduct research and development. The key concept of the NYC CV pilot program is to equip large fleet vehicles with CV tech to advance toward NYC’s Vision Zero goal: to eliminate injuries and fatalities due to traffic crashes. The basic system architecture consists of hundreds of roadside units (RSUs) installed in high-density traffic areas communicating with thousands of aftermarket safety devices (ASDs) installed in fleet vehicles such as taxis, buses, UPS trucks, and sanitation vehicles over Dedicated Short-Range Communication (DSRC) networks.

This image displays the planned permanent installation of RSU infrastructure in a high-density traffic area in Brooklyn/Manhattan and the corresponding streets and roads where the technology will apply.

Now let’s discuss the basic functionality of the pilot CV tech. What specific application (V2V or V2I) functions will assist drivers and pedestrians? Here’s a quick and basic breakdown of the real-time alerts and warnings which will be immediately displayed (visually and/or aurally) to drivers to reduce collision potential (a rough code sketch of how a couple of these rules might be expressed follows the list):

The Emergency Electronic Brake Light (EEBL) and Forward Crash Warning (FCW) alerts target rear-end pre-crash scenarios, such as alerting the driver that the lead vehicle is decelerating at more than 0.4 g.

Blind Spot Warning (BSW) and Lane Change Warning (LCW) alert the driver to an unsafe lane change to reduce sideswipes.

Red Light Violation Warning (RLVW) warns the driver if they are about to enter an intersection in violation of a red light.

Speed Compliance (SC) alerts the driver when they exceed the posted speed limit.

Oversize Vehicle Compliance (OVC) alerts the drivers of oversized vehicles (trucks) to unsafe or restricted road use.
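To make the alert logic tangible, here is a hedged sketch of how the first two rules could be expressed in code. The message fields, the thresholds other than the 0.4 g value quoted above, and the helper names are assumptions for illustration, not the NYC CV pilot’s actual software or the SAE message format.

```python
from dataclasses import dataclass

G = 9.81  # m/s^2

@dataclass
class LeadVehicleMessage:
    """Simplified stand-in for a V2V safety message received from the vehicle ahead."""
    vehicle_id: str
    speed_mps: float
    deceleration_mps2: float
    range_ahead_m: float  # separation from the host vehicle

def eebl_warning(msg, threshold_g=0.4):
    """Emergency Electronic Brake Light: lead vehicle braking harder than the threshold."""
    return msg.deceleration_mps2 > threshold_g * G

def fcw_warning(msg, host_speed_mps, min_ttc_s=2.5):
    """Forward Crash Warning: time-to-collision with the lead vehicle below a limit."""
    closing_speed = host_speed_mps - msg.speed_mps
    if closing_speed <= 0:
        return False
    return msg.range_ahead_m / closing_speed < min_ttc_s

lead = LeadVehicleMessage("taxi-42", speed_mps=5.0, deceleration_mps2=4.5, range_ahead_m=30.0)
print(eebl_warning(lead), fcw_warning(lead, host_speed_mps=15.0))
```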

The NYC CV pilot program has also established extensive performance measures to evaluate the tech during various phases of implementation, including a detailed set of V2I performance measures.

The CV pilot program, still in initial design and development, has numerous phases with countless obstacles still to overcome. Some of these relate to the technical challenges of operating GPS in an urban canyon environment, while others relate to the privacy and security issues of the government gathering and storing colossal amounts of data on its citizens and their movements. Without question, a major trial for NYC (and all other cities to follow) will be earning the “buy in” and trust of the public before these technologies are implemented. NYC has a sullied history of widely abusing Fourth Amendment protections, and the future will be no different unless proper and thorough preparation is made. Where will traffic data be stored? Who can access the data? How will citizens be protected?

Assistive technology and automation have limitless capability to make our world safer by protecting our people, resources, and environment. However, with the application of these technologies come the obvious challenges of balancing our safety and our freedom. After all, a highly restrictive surveillance state complete with a zero traffic fatality rate, while it may unequivocally improve public safety, would be a major social deterioration from our current condition. Nonetheless, I am very excited to see our world (and our roads) become safer in the near future!

-MPF

Sources:

DoT Intelligent Transportation

NYC CV Pilot Program

NYC Connected Vehicle Project

Hello! … & Effects of automation on military aviation capability


Greetings! My name is Mike Feltovic and I am a @USCG helicopter pilot by trade and currently studying Human Factors Engineering at @Tufts (Go Jumbos!). Very happy to be here… and happy to be discussing HFE topics with you over the coming months!

To open things up, we will be discussing how “automation” has shaped and evolved human-machine systems in military aviation over the last ~50 years. Let’s break that 50-year period into three smaller ones: the “Early years,” the “Pre-Vietnam era,” and the “Current Day.”

Robert Mason elegantly describes the major evolution over the first two periods in his Vietnam aviation memoir “Chickenhawk” (I highly recommend it if aviation interests you). In the “Early years,” Mason explains, it was U.S. Army policy that helicopter pilots were limited to 2 hours of flight time per day (outside of wartime). This was due to the physically (and mentally) intensive motor skills required to safely operate the flight controls of early helicopters. There was no mechanical automation, and flying required pilots to constantly move and adjust the flight controls throughout the flight. This made the job extremely tiring!

Diagram 1 | Diagram 2

Then came a rudimentary advancement in the “Pre-Vietnam era”: the mass adoption of “friction control” in helicopter flight control systems. Diagram 1 above shows a basic layout of helicopter flight controls; note the “friction knob” on the collective control lever pictured in Diagram 2. This knob allowed Vietnam-era pilots to increase the friction on a particular flight control system, letting the pilot temporarily remove their hands from the controls (which would remain rigidly in place and not flop over like in older models) in order to complete other tasks such as eating, resting, tuning a radio or nav system, or scratching an itch. With these advancements, the Army updated its policy and allowed pilots to fly longer missions … no more than 4 hours per day. That’s a 100% increase thanks to automation!

Mike with the Royal Canadian Air Force CH-149 Cormorant

Now, in our “Current Day,” engineers and designers have exploited automation to a degree that would be unfathomable to early pilots. In my experience flying Search and Rescue with the Royal Canadian Air Force (RCAF) in the CH-149 Cormorant (a state-of-the-art helicopter complete with loads of computer and mechanical automation), RCAF policy limited us to 15 hours of continuous flight time when prosecuting life-saving missions. And … the policy further stated that if you made extensive use of automation during flight operations, you could extend your crew day to 18 hours of flight time. 18 hours! That’s a big difference from the original 2-hour Army limit in the “Early years,” when no automation existed!

Automation in military (and civilian) aviation has come a long, long way, as you can see. However, with all the benefits of automation come obvious drawbacks. I hope we can discuss those soon, so stay tuned!

-MPF