Tuesday, May 5, 2015

The Basic Readiness Course

The first edition of the Legion of Frontiersmen's Basic Readiness Course is almost ready for use.  Final draft copies are out for pilot testing, and work is on schedule to have the Internet testing site up shortly.  We expect that the new course will be available for all members of Countess Mountbatten's Own Legion of Frontiersmen by 1 June.

This is an important development.  The Legion's Constitution is very clear that our primary mission is to assist civil authorities in the event of disaster in the countries in which we have units.  The specific services Legion units can perform depend on local needs, but every Frontiersman should have a basic understanding of what a disaster is, how to respond to it, and the basic common skills involved.  The course aims to achieve that goal by providing a common text for unit level training or for self-study, combined with an online test hosted on a modern learning management system.
 
From our standpoint as a Disaster Management Group whose members operate solely in the virtual environment, the course contents are still important.  They present our view of disaster response, as well as useful knowledge that any informed person should have.  Completion of the course does require a current basic/standard first aid card from a recognized training source.  This training is supplemental to our existing four level training program, which is designed to qualify members for online emergency operations center duties.

Thursday, March 5, 2015

How Much Information?

We know that it is possible to have too little information for disaster assessment and decision making.  But is it possible to have too much?  Don't we want to know everything that is happening, in as much detail as possible? 
 
The answer depends on who you are, where you sit, what you do, and for what purpose you need the information.  For example, if you are managing response by a public utility in a small city or large town, you really do want to know if there is an outage affecting one block in one neighborhood, and you want to know the house numbers involved.  That information is essential to getting the right repair crew to the right address at the right time (considering all the other demands for service) to fix the problem.
 
But as we move up the chain of increased responsibility for broader problems:
 
... if you are the chief elected official of the jurisdiction, you may only want to know the general impact and the general neighborhood,
 
... if you are the person responsible for assistance to utilities in the state emergency operations center, you only want to know specifically about this outage if there are unusual technical problems involved - otherwise the total number of outages in the city and whether assistance is needed from other resources is enough information,
 
... and if you are the governor, you just want the numbers - how many outages are there in the state (unless there is a really good human interest story that can be used to make a policy or budgetary point).
 
Each of the people involved has different needs for fine detail, and that detail generally becomes less relevant the higher you go up the chain of responsibility and authority.  As you go further up the chain, the data is consolidated with data from other locations to form a broader, more general picture of the event.  However, there will be incidents in which fine detail becomes important for any of a wide variety of issues - technical, resource allocation, media attention driven, or political - that can disrupt the response.  In these cases the levels that deal with details have to identify and highlight the problems for attention by higher levels of management, and the higher levels have to be able to reach down the chain to recover the detailed data.
 
So why not simply push all the house numbers in our case up to the Governor as a routine matter?  That way he or she has everything needed when it is needed.  The truth is that too much information slows decisions, and may result in wrong decisions by individuals without specific training in the technical details.
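
To make the point concrete, here is a minimal sketch in Python of the same outage records serving three levels of decision makers at different granularity; the records, names, and counts are invented for illustration, not drawn from any real event.

```python
# A minimal sketch, with invented data, of one set of outage records
# feeding three levels of decision makers at different granularity.
from collections import Counter

outages = [
    {"neighborhood": "Elm Heights", "address": "114 Oak St"},
    {"neighborhood": "Elm Heights", "address": "118 Oak St"},
    {"neighborhood": "Riverside",   "address": "22 Mill Rd"},
]

# The utility dispatcher needs every address to route repair crews.
for outage in outages:
    print(f"Crew to {outage['address']} ({outage['neighborhood']})")

# The mayor needs only the general impact by neighborhood.
print(dict(Counter(o["neighborhood"] for o in outages)))

# The governor needs only the total (here, one city's worth).
print(f"Outages reported: {len(outages)}")
```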
 
And there is a related problem.  The reality is that gathering more and more information until you have everything is just not easy to do.  Some of this is initial uncertainty; some of it is the size of the problem.  The key is to decide when good enough is good enough, and to realize that getting to perfect is time and resource intensive while not making a major impact on the quality of the decisions.
 
All of this means that at each level we have to define the key elements of information the decision makers at that level will need.  Then the emergency management system has to be able to provide that information in a format the decision maker can easily use.  In some cases this will be specific numbers; in other cases a RED-YELLOW-GREEN color coded continuum is not only adequate but preferable.  Although the old Homeland Security color coded terrorist threat conditions proved to be an inefficient irritant, color coding is not inherently bad.  Its application must be tied to decision needs, as must all of the information presented for decision making.
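
As a sketch of what tying color coding to decision needs might look like in practice, here is a hypothetical threshold function; the numbers are invented, and real thresholds would be set by the jurisdiction's own planning.

```python
# Hypothetical thresholds tying a raw outage count to a RED-YELLOW-GREEN
# status for a decision maker; real values would come from local planning.
def outage_status(count: int) -> str:
    if count >= 500:    # assumed level at which outside assistance is needed
        return "RED"
    if count >= 100:    # assumed level worth an elected official's attention
        return "YELLOW"
    return "GREEN"      # assumed routine workload for local crews

print(outage_status(3))    # GREEN
print(outage_status(742))  # RED
```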
 

Saturday, February 21, 2015

Not All Information Is Created Equal

Accurate information is difficult to obtain in the early stages of a disaster.  There are many reasons for this, including:
 
... the difficulty of gathering information, especially during impact, and the inaccessibility of parts of the impact area.
... reports based on partial observation or knowledge of the disaster's impact.
... the natural tendency to translate our horror or distress at the impacts of the incident into reports that are symbolic, rather than factual.
... the equally natural tendency of some of those reporting to fill in missing details with what they believe may have happened.
... and the desire of some to be helpful or even important by making reports that do not reflect any reality but their own.
 
So how do we evaluate which reports are valuable and which are worthless?  And how can we make decisions based on them?
 
The first rule may seem counterintuitive if we are trying to sort good from worthless - do not automatically discard any report received.  In a search for a missing aircraft in the Florida Panhandle in the early 1980s, a ground team interviewing along the route of flight was having no luck.  And then, on a front porch littered with alcoholic beverage containers, an obviously intoxicated "witness" reported seeing a pink aircraft circle a radio antenna tower and then head back in the direction from which it had come.  The pink aircraft in this case seemed intuitively to be a first cousin to pink elephants.  However, radar track data the next day showed that the missing aircraft had in fact been in that location, had circled twice, and had turned back on its original track.  Oh, and the pink thing - a red sunset and a white aircraft ...
 
So we have to look at two key components in any report: (1) the reliability of the reporter and (2) the level of confirmation of the report.  You can use any scale you want to make this evaluation, but we use a simple color coded scale.  If you plot reports, the color codes help visually depict what the words represent.  And the colors are a quick way of capturing the assessment in any culture that uses red, yellow, and green traffic lights.
 
The reliability of the reporter is based on history and reflects the expectation that reliable sources tend to make reliable reports.  Our scale is:
 
GREEN - highly reliable source - a source likely to supply information as accurate as possible about this subject.
YELLOW - source of medium reliability - a generally reliable source, but one that may have only a partial understanding of, or limited access to, the full range of information.
RED - low reliability source - a source whose characteristics or record indicates that the information has a probability of being incorrect or even deliberately misleading.
 
The level of confirmation reflects the principle that a number of reports with the same or similar content are more likely to approach reality, and be more actionable, than a single report.  Our scale is:
 
GREEN - confirmed - information confirmed by a number of other sources or by reliable technological methods.
YELLOW - partly confirmed - information confirmed by a single additional source, or unconfirmed but consistent with conditions or with limited information from other sources.
RED - unconfirmed - information that is not confirmed by or contradicts other information received.

As more reports are received, the level of confirmation of any single report will change.  In the case of the drunk on the front porch, the initial evaluation was RED/RED.  However, when technological evidence emerged that the report was true, it became a RED/GREEN.
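
For readers who want to see the mechanics, here is a minimal sketch of the two-part evaluation in Python; the class and the upgrade rule are illustrative, not a description of our actual tooling.

```python
# A minimal sketch of the two-part report evaluation described above;
# the Report class and upgrade rule are illustrative only.
from dataclasses import dataclass

# Scales ordered best to worst, so a lower index is a stronger rating.
RELIABILITY = ("GREEN", "YELLOW", "RED")   # of the reporter
CONFIRMATION = ("GREEN", "YELLOW", "RED")  # of the report content

@dataclass
class Report:
    source: str
    content: str
    reliability: str   # based on the source's history
    confirmation: str  # based on corroboration received so far

    def corroborate(self, level: str) -> None:
        """Upgrade confirmation as corroborating information arrives."""
        if CONFIRMATION.index(level) < CONFIRMATION.index(self.confirmation):
            self.confirmation = level

# The porch witness: initially RED/RED, logged rather than discarded.
sighting = Report("porch witness", "pink aircraft circled tower", "RED", "RED")

# Radar track data confirms the content the next day: now RED/GREEN.
sighting.corroborate("GREEN")
print(sighting.reliability, sighting.confirmation)  # RED GREEN
```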
 
RED/RED information should be approached with skepticism.  However, it should be logged, not ignored.  RED/RED reports prove true often enough to justify recording them, in case other confirming reports are received or no other information emerges at all.  At the same time, GREEN/GREEN is not guaranteed to be true.  Even multiple reports by the best sources miss the mark from time to time under the conditions of the event.

Wednesday, February 4, 2015

Update on Our Open Badges

Our open badge project is moving forward - we now have 8 badges credentialing specific training and operational experience.  As we have developed these badges, we have effectively defined a future architecture with 5 categories (a rough sketch in code follows the list):

(1) general training badges - tied to our four level operational qualification program - ranging from entry to advanced level.

(2) specialized incident commander training - tied to completion of our three levels of training for incident commanders in The Virtual Emergency Operations Center - ranging from basic to advanced level.

(3) skills credentialing - verifying specific skill performance critical to operation of The Virtual Emergency Operations Center.

(4) service and experience - in 100 hour increments in actual disasters or exercises for our staff, and in increments based on the number of reports submitted for our disaster observer micro-volunteers.

(5) professional certification - the Certified Technologist in Virtual Emergency Management (CTVEM) - capstone level integrating elements of the other four badge categories.
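
As a rough illustration (and only that), the architecture might be modeled in code along these lines; the category strings and sample badge names below are invented, not our production catalog.

```python
# A hypothetical model of the five-category badge architecture; the
# category strings and sample badges are invented for illustration.
from dataclasses import dataclass

CATEGORIES = (
    "general training",             # four level operational qualification
    "incident commander training",  # three Virtual EOC IC levels
    "skills credentialing",         # specific Virtual EOC skills
    "service and experience",       # hour or report-count increments
    "professional certification",   # the CTVEM capstone
)

@dataclass(frozen=True)
class Badge:
    name: str
    category: str

earned = [
    Badge("Operational Qualification - Entry", "general training"),
    Badge("100 Hours of Disaster Service", "service and experience"),
]

for badge in earned:
    assert badge.category in CATEGORIES  # every badge maps to one category
    print(f"{badge.name} [{badge.category}]")
```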

Tuesday, February 3, 2015

On The Importance Of Disaster Exercises

This is an update and expansion on an earlier blog post on exercises based on our overall experience in 2014.

We do some level of activation numerous times a year.  When we look at the data from 2014, we see that we participated in 17 events, including:
  • Responses for actual events - ranging from upgrading our alert status to Communications Watch to full Activation.
  • External exercises - supporting the organizations to which we provide services.
  • External "day of ..." exercises - the Great ShakeOut is an example: drills that increase awareness of specific hazards and that allow us to practice our operations in the hazard scenario.
  • Internal exercises and drills - intended to work on specific technical issues or to train our staff either in specific response actions or in responding to different types of disasters.
  • Orientation seminar exercises - in our case as presentations of our capabilities to potential supported organizations or agencies.
Why do we, as volunteers, want to do that much work?  Each exercise that we designed required a full work-up: drafting a master sequence of events list, preparing inputs, doing an after action review, and so on.  Some organizations count themselves lucky to do one tabletop exercise a year.

We respond to actual events because they are actual, and the organizations we support want our assistance.  So the simple answer to "why" is so that we will be able to respond when called.  But it is more complex than that, and the answer lies in what exercises and drills do.

One categorization of exercises divides them into discussion based and operations based events.  The discussion based events include:
  • Orientation Seminars - to acquaint participants with new plans, concepts, resources, etc.
  • Workshops - to achieve a specific goal or develop a specific product, such as a plan or procedures.
  • Tabletop Exercises - to assess plans and policies and develop common approaches to problems.
  • Games - to explore decision making in a team environment and to evaluate the results of decisions.
The operations based events include:
  • Drills - testing a specific function by actually performing it.
  • Functional Exercises - staffs do actual work, but movement of field units is simulated, to test and evaluate command center operations.
  • Full Scale Exercises - operations by all components of the system to implement plans, procedures, agreements, etc. developed in other exercises.
From this list we use orientation seminars (as part of new member Initial Qualification training), drills (to work specific functions in The Virtual Emergency Operations Center), and functional exercises (for a full function response to a scenario).

There is a long standing division in the exercise literature about what exercises are for.  Some argue that they are for training - you hold an exercise to train people to work as a team in an otherwise unfamiliar environment.  Others argue that training is something you do in a classroom or training facility, and that exercises exist only to evaluate whether the training has done its job and the procedures are appropriate.  We view this distinction as an exercise in pedantic pettifoggery.  Perhaps if you have a multi-million dollar budget, a large exercise staff, and essentially unlimited time, it might make sense to build firewalls around the purpose of an exercise.  But for a small non-profit organization, every exercise has to both train and test.

But it is more than simply training and testing.  One of the principles of training is that trainees do well the things that are both practiced and recent.  Hint - the applicability of this principle does not stop when initial training ends.  Practice with repetition of key tasks builds both speed and quality of execution.  This is true in sports (a major league baseball pitcher who only threw the ball in games, but never in practice, would not be a pitcher very long, no matter how many balls he threw in Little League, high school, college, and the minor leagues) and, for that matter, in any physical skill.  It is also true of mental skills such as decision making.

But practiced skills must also be recently used to be at their top form.  I once had a job where I controlled tactical fighter aircraft in air combat training.  I was hand-picked for the job because of my proficiency as a ground controlled intercept controller.  I controlled thousands more tactical intercepts with more aircraft involved under higher pressure than the average controller in a tactical control unit.  And yet on Monday morning, having been off the radar scope for two days, I was noticeably less proficient.  The lesson of the war story is that if you work one disaster event a year (real or simulated) you will be less proficient the further you are from that one event, and certainly less proficient than if you work 17. 

 

Monday, January 5, 2015

We Take Key Skills For Granted ...

... and we really should not.  In running an emergency operations center there is often a baseline assumption that the people staffing the facility know how to do certain basic things - that anyone can make a good log entry, complete a formatted report, or follow an emergency operations plan, for example.  But experience suggests that this clearly is not true.
 
A number of years ago, I was one of the search mission coordinators on a search for a missing aircraft en route from Florida to points north.  We flew hundreds of low-level search sorties in dangerous summer weather looking for the aircraft, at some risk and considerable cost, over a two week period.  On the last day we had planned to search, one of our aircrews made the find.  As we were cleaning up the room we had used to coordinate search operations, we found a torn piece of a legal pad with information on it from a witness that pinpointed the crash within 2 miles on a flight path several hundred miles long.  It had slipped down behind a desk.  A scribbled date and time indicated that this information was received on the first day of the search.  Someone clearly did not know how to manage witness reports and ensure that the information entered the system.
 
And a number of years ago, I was serving as an observer for a county level disaster exercise.  The Fire Battalion Chief and Emergency Medical Services Captain (both of whom I knew to be highly competent leaders on scene) staffing the emergency services position were presented with a problem.  The dialog went something like this:
 
Fire - "I don't know what to do about this, what do you think we should do ...?"
EMS -  "I don't know either, what do you think?"
Fire -  "I just don't know, maybe we ought to call the Chief"
EMS - "yes, that makes sense, lets call the chief."
Me, trying to be helpful - "well what does the plan say to do in cases like this?"
Fire - "what plan?"
EMS - "a plan?  There is a plan?"
Me, trying to control myself, and pointing to the red, three ring binder, a foot and a half in front of them, clearly labeled EMERGENCY OPERATIONS PLAN in very large white letters - "well that binder has PLAN on it, so maybe that will help ..."
Fire and EMS pull the plan towards themselves and open the binder.
Fire - "there is a lot of stuff in here."
EMS - "yeah, I can't figure out where to look."
Me, trying to be helpful - "well I can see that there is a tab there that says ANNEX K. EMERGENCY MEDICAL SERVICES ... maybe there is something in there."
Fire, opening up Annex K - "boy, there sure are a lot of words in here."
EMS - "yeah, there are too many words, we better call the chief and ask him what he wants us to do."
Fire, dialing the chief's number - "yeah, the chief will know what to do."
Me, pulling the last handful of hair out of my head, and writing furiously on the comment sheet ...
 
This is not a ding on either fire or emergency medical services personnel.  It could have been anyone.  In years of working in emergency operations centers, I have seen very few staff members actually open up a plan and apply the planning to their problem.
 
We assume that people can do a wide variety of tasks.  And these tasks are important.  In the first case, doing the job correctly - establishing a log and recording the witness information in it - could have saved 13 days of flying search sortie after search sortie.  In the second, not knowing what to do or where to find guidance delayed the response.  In a real event, with the chief unavailable, what would the two officers have done?
 
The important lesson is that training is important, but a lot of training misses the mark.  The search staffs were trained, and all were certified in their jobs.  The fire and EMS officers were very well trained in their day to day jobs.  But no one had sat down, put pencil to paper, and sketched out exactly what seemingly unimportant little details they needed to know to be competent in the command/coordination center environment.  Something as simple as writing a good log entry takes ability and knowledge.  So look for the small details, train on those, and your performance will improve.
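
As a sketch of what that discipline might look like, here is a hypothetical minimal log entry in Python; the fields and the sample report are invented, but the principle is the point: every report gets a timestamp, a source, and a disposition, so nothing slips down behind a desk.

```python
# A hypothetical minimal log entry; the fields are invented, but each
# report gets a received time, a source, the content, and a disposition.
from datetime import datetime, timezone

def log_entry(source: str, content: str, disposition: str) -> dict:
    """Record who reported what, when it arrived, and what was done with it."""
    return {
        "received": datetime.now(timezone.utc).isoformat(timespec="minutes"),
        "source": source,
        "content": content,
        "disposition": disposition,
    }

log = [log_entry(
    source="telephone witness, name and callback number recorded",
    content="low-flying aircraft heard near the river crossing, about 1830 local",
    disposition="plotted on the search map; passed to the mission coordinator",
)]
print(log[0])
```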
 
 

Sunday, December 28, 2014

Badges? Open Badges? What is this?

In the classic Western comedy Blazing Saddles, the bandidos utter the famous line "Badges? We don't need no stinking badges!"  So why then are we, as a disaster response organization, starting a badge program?
 
The answer lies in what modern open badges do.  Traditional verifications of skills have been based on formal education or on worker progress through apprenticeships in trades.  Skills learned through self-study, individual practice, and other learner based approaches were undocumented.  To some degree the explosion of certifications that started in the 1960s in the United States addressed this by creating credentials that documented competency in a wide variety of semi-professional forms of work.
 
However, these certifications mirrored the traditional academic verifications with formal structure and rigid requirements.  In our field, for example, more than one professional certification as an emergency manager specifies that the required documentation must be submitted in a three ring binder, that no document protectors are allowed, and that the information must be in exactly the same order as the requirements listed in the instructions.  That certainly demonstrates competency in preparing an application packet following highly detailed instructions in a highly bureaucratic environment, but its value in documenting the ability to save lives and protect property is open to question.
 
The wider problem is that professional certifications are heavily weighted toward activities that can only be completed by individuals in full time employment, with a supervisor who values certification and adequate funding for travel.  A requirement that a certification applicant must have attended a set number of days of certain professional conferences (attendance is a common "contribution to the profession") effectively rules out anyone who does not have employer support - and that means it rules out volunteers.
 
More dangerously, such certifications tend to substitute easily measured events for demonstrated performance of skills.  For example, attending an ungraded training course is assumed to demonstrate that you can perform to an acceptable level the skills it presents in a lecture or short activity setting.  At the same time there is a strong tendency toward traditionalism in learning - classroom courses are valued over online instruction.  And there is an inbuilt prohibition against giving credit for training received more than a certain number of years ago - in emergency management, typically five years.  Finally, actually understanding what a certification represents may be difficult, requiring a journey through multiple sources.
 
So we have looked carefully at the open badge movement.  First, we are a volunteer organization, so we need verification of capabilities that volunteers can actually achieve.  Second, we need portability and easy access, so that the credential and its requirements are readily available to the individual and to those who need to verify capabilities.  Third, we need to recognize the actual skills the individual can perform and the level of those skills.  We need immediacy, so that our credentials represent capabilities when they are demonstrated.  And finally, we need to understand the scope and trajectory of our volunteers' work in building skills, not just a delimited snapshot of a five year interval.  Open badges allow us to do all these things.
 
Our first step is to badge our four existing training levels because these are based on a mix of training, experience, testing, and skill demonstration and are easy to capture.  The next step will be to start to badge specific competencies needed for Internet based disaster work.
 
And if you are wondering what a badge is ... badges are simply electronic symbols with embedded metadata that document the achievement of competencies and provide information on who issued the badge, the requirements for its award, and the level of skill it represents.  We are using Achievery as our badging system, and recommend the use of Mozilla's Backpack as a framework for building an electronic badge resume.
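
For the curious, here is a simplified sketch of the kind of metadata a badge carries, loosely following the shape of a Mozilla Open Badges 1.x assertion; the identifiers and URLs are placeholders, not our actual Achievery records.

```python
# A simplified, hypothetical open badge assertion; identifiers and URLs
# are placeholders showing the kind of metadata a badge embeds.
import json

assertion = {
    "uid": "eoc-level1-0001",                                # placeholder id
    "recipient": {"type": "email", "hashed": False,
                  "identity": "volunteer@example.org"},      # placeholder
    "badge": "https://example.org/badges/eoc-level1.json",   # badge class:
                                                             # name, criteria,
                                                             # issuer
    "issuedOn": "2015-02-04",
    "verify": {"type": "hosted",
               "url": "https://example.org/assertions/0001.json"},
}
print(json.dumps(assertion, indent=2))
```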

And a disclosure note - the author, Dr. Walter Green, taught for 13 years in an academic emergency management degree program, wrote his dissertation on emergency management certification, at one time held 6 national, state, and professional association emergency management certifications, and wrote the requirements package for Virginia's Professional Emergency Manager certification program when it was introduced.