Saturday, February 21, 2015

Not All Information Is Created Equal

Accurate information is difficult to obtain in the early stages of a disaster.  There are many reasons for this, including:
 
... the difficulty of gathering information, especially during impact, and the inaccessibility of parts of the impact area.
... reports based on partial observation or knowledge of the disaster's impact.
... the natural tendency to translate our horror or distress at the impacts of the incident into reports that are symbolic, rather than factual.
... the equally natural tendency of some of those reporting to fill in missing details with what they believe may have happened.
... and the desire of some to be helpful or even important by making reports that do not reflect any reality but their own.
 
So how do we evaluate which reports are valuable and which are worthless?  And how can we make decisions based on them?
 
The first rule may seem counterintuitive if we are trying to sort good from worthless - do not automatically discard any report received.  In a search for a missing aircraft in the Florida Panhandle in the early 1980s, a ground team interviewing along the route of flight was having no luck.  And then, on a front porch littered with alcoholic beverage containers, an obviously intoxicated "witness" reported seeing a pink aircraft circle a radio antenna tower and then head back in the direction from which it had come.  The pink aircraft in this case seemed intuitively to be a first cousin to pink elephants.  However, radar track data the next day showed that the missing aircraft had in fact been in that location, had circled twice, and had turned back on its original track.  Oh, and the pink thing - a red sunset and a white aircraft ...
 
So we have to look at two key components in any report: (1) the reliability of the reporter and (2) the level of confirmation of the report.  You can use any scale you want to make this evaluation, but we use a simple color-coded scale.  If you plot reports, the color codes help visually depict what the words represent.  And the colors are a quick way of capturing the assessment in any culture that uses red, yellow, and green traffic lights.
 
The reliability of the reporter is based on history and reflects the principle that reliable sources tend to make reliable reports.  Our scale is:
 
GREEN - highly reliable source - a source likely to supply as accurate information as possible about this subject.
YELLOW - source of medium reliability - a generally reliable source, but one that may have only a partial understanding of, or limited access to, the full range of information.
RED - low reliability source - a source whose characteristics or record indicates that the information has a probability of being incorrect or even deliberately misleading.
 
The level of confirmation reflects the principle that a number of reports with the same or similar content is more likely to approach reality, and to be more actionable, than a single report.  Our scale is:
 
GREEN - confirmed - information confirmed by a number of other sources or by reliable technological methods.
YELLOW - partly confirmed -  information confirmed by a single additional source or unconfirmed, but consistent with conditions or limited information from other sources.
RED - unconfirmed - information that is not confirmed by or contradicts other information received.

As more reports are received, the level of confirmation of any single report will change.  In the case of the drunk on the front porch, the initial evaluation was RED/RED.  However, when technological evidence emerged that the report was true, the evaluation became RED/GREEN.
 
RED/RED information should be approached with skepticism.  However, it should be logged and not ignored.  RED/RED reports prove true often enough to justify recording them in case confirming reports arrive later, or in case no other information emerges.  At the same time, GREEN/GREEN is not guaranteed to be true.  Even multiple reports from the best sources miss the mark from time to time under the conditions of the event.
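For anyone who tracks reports in a simple tool rather than on paper, here is a minimal sketch of the two-axis rating and of how a report's confirmation level can be upgraded as corroborating information arrives.  It is only an illustration - the class and function names are our own, and the upgrade rule is a simplified reading of the scales above.

from dataclasses import dataclass, field
from enum import Enum

class Reliability(Enum):
    """Reliability of the reporter, based on the source's history."""
    GREEN = "highly reliable source"
    YELLOW = "source of medium reliability"
    RED = "low reliability source"

class Confirmation(Enum):
    """Level of confirmation of the report's content."""
    GREEN = "confirmed"
    YELLOW = "partly confirmed"
    RED = "unconfirmed"

@dataclass
class Report:
    source: str
    content: str
    reliability: Reliability
    confirmation: Confirmation = Confirmation.RED   # a lone report starts unconfirmed
    confirming_sources: list = field(default_factory=list)

    def add_confirmation(self, confirming_source: str, technological: bool = False) -> None:
        """Upgrade the confirmation level as corroborating information arrives."""
        self.confirming_sources.append(confirming_source)
        if technological or len(self.confirming_sources) >= 2:
            self.confirmation = Confirmation.GREEN    # confirmed
        else:
            self.confirmation = Confirmation.YELLOW   # partly confirmed

# The front-porch witness: initially evaluated RED/RED ...
sighting = Report(
    source="intoxicated witness",
    content="pink aircraft circled a radio tower and turned back",
    reliability=Reliability.RED,
)
# ... until radar track data confirmed it, making it RED/GREEN.
sighting.add_confirmation("radar track data", technological=True)
print(sighting.reliability.name, "/", sighting.confirmation.name)   # RED / GREEN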

Wednesday, February 4, 2015

Update on Our Open Badges

Our open badge project is moving forward - we now have 8 badges credentialing specific training and operational experience.  As we have developed these badges, we have effectively defined a future architecture for our badges with 5 categories (a rough data model sketch follows the list):

(1) general training badges - tied to our four level operational qualification program - ranging from entry to advanced level.

(2) specialized incident commander training - tied to completion of our three levels of training for incident commanders in The Virtual Emergency Operations Center - ranging from basic to advanced level.

(3) skills credentialing - verifying specific skill performance critical to operation of The Virtual Emergency Operations Center.

(4) service and experience - awarded in 100-hour increments in actual disasters or exercises for our staff, and in varying increments of numbers of reports for our disaster observer micro-volunteers.

(5) professional certification - the Certified Technologist in Virtual Emergency Management (CTVEM) - capstone level integrating elements of the other four badge categories.
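To make the architecture concrete, here is a minimal sketch of how a badge registry might represent these five categories.  It is only an illustration - the class names, fields, and the example badge name are hypothetical, not part of any existing badge platform.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class BadgeCategory(Enum):
    """The five categories of our planned badge architecture."""
    GENERAL_TRAINING = 1      # four-level operational qualification program
    IC_TRAINING = 2           # three levels of incident commander training
    SKILLS_CREDENTIAL = 3     # specific skills in The Virtual Emergency Operations Center
    SERVICE_EXPERIENCE = 4    # hours served or reports submitted
    PROFESSIONAL_CERT = 5     # capstone CTVEM certification

@dataclass
class Badge:
    name: str
    category: BadgeCategory
    level: Optional[str] = None   # e.g. "entry" through "advanced", where levels apply
    criteria: str = ""            # what the holder did to earn the badge

# Example: a hypothetical service and experience badge for one 100-hour increment.
service_100 = Badge(
    name="Disaster Service - 100 Hours",
    category=BadgeCategory.SERVICE_EXPERIENCE,
    criteria="100 hours of service in actual disasters or exercises",
)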

Tuesday, February 3, 2015

On The Importance Of Disaster Exercises

This is an update and expansion of an earlier blog post on exercises, based on our overall experience in 2014.

We do some level of activation numerous times a year.  When we look at the data from 2014, we see that we participated in 17 events, including:
  • Responses to actual events - ranging from upgrading our alert status to Communications Watch to full Activation.
  • External exercises - supporting the organizations to which we provide services.
  • External "day of ..." exercises - drills, such as the Great Shake Out, that increase awareness of specific hazards and allow us to practice our operations in the hazard scenario.
  • Internal exercises and drills - intended to work on specific technical issues or to train our staff either in specific response actions or in responding to different types of disasters.
  • Orientation seminar exercises - in our case as presentations of our capabilities to potential supported organizations or agencies.
Why do we, as volunteers, want to do that much work?  Each exercise that we designed required a full work-up: drafting a master sequence of events list, preparing inputs, doing an after-action review, etc.  Some organizations count themselves lucky to do one tabletop exercise a year.

We respond to actual events because they are actual, and the organizations we support want our assistance.  So the simple answer to why is so that we will be able to respond when called.  But it is more complex than that, and the answer lies in what exercises and drills do.

One categorization of exercises divides them into discussion based and operations based events.  The discussion based events include:
  • Orientation Seminars - to acquaint participants with new plans, concepts, resources, etc.
  • Workshops - to achieve a specific goal or develop a specific product, such as a plan or procedures.
  • Tabletop Exercises - to assess plans and policies and develop common approaches to problems.
  • Games - to explore decision making in a team environment and to evaluate the results of decisions.
The operations based events include:
  • Drills - testing a specific function by actually performing it.
  • Functional Exercise - staffs do actual work but movement of field units is simulated to test and evaluate command center operations.
  • Full Scale Exercise - with operations by all components of the system to implement plans, procedures, agreements, etc. developed in other exercises.
From this list we use orientation seminars (as part of new member Initial Qualification training), drills (to work specific functions in The Virtual Emergency Operations Center), and functional exercises (for a full function response to a scenario).

There is a long-standing division in the exercise literature about what exercises are for.  Some argue that they are for training - you hold an exercise to train people to work as a team in an otherwise unfamiliar environment.  Others argue that training is something you do in a classroom or training facility, and that exercises exist only to evaluate whether the training has done its job and the procedures are appropriate.  We view this distinction as an exercise in pedantic pettifoggery.  If you have a multi-million dollar budget, a large exercise staff, and essentially unlimited time, it might make sense to build firewalls around the purpose of an exercise.  But for a small non-profit organization, every exercise has to both train and test.

But it is more than simply training and testing.  One of the principles of training is that trainees do well at things that are both practiced and recent.  Hint - the applicability of this principle does not stop when initial training ends.  Practice with repetition of key tasks builds both speed and quality of execution.  This is true in sports (a major league baseball pitcher who only threw the ball in games, but never in practice, would not be a pitcher very long, no matter how many balls he threw in Little League, high school, college, and the minor leagues) and, for that matter, in any physical skill.  It is also true of mental skills such as decision making.

But practiced skills must also be recently used to be at their top form.  I once had a job where I controlled tactical fighter aircraft in air combat training.  I was hand-picked for the job because of my proficiency as a ground-controlled intercept controller.  I had controlled thousands more tactical intercepts, with more aircraft involved, under higher pressure than the average controller in a tactical control unit.  And yet on Monday morning, having been off the radar scope for two days, I was noticeably less proficient.  The lesson of the war story is that if you work one disaster event a year (real or simulated), you will be less proficient the further you are from that one event, and certainly less proficient than if you work 17.