Air France 447 and Deepwater Horizon: Considerations For Team Readiness

Published on January 17, 2012 in Case Studies, News Worthy

In the dark, early-morning hours of June 1, 2009, Air France Flight 447, en route from Rio de Janeiro to Paris over the Atlantic Ocean, lost some of its instrumentation while attempting to dodge a thunderstorm. The flight crew became confused and lost control of the aircraft, which pitched upward and stalled. Unable to recover, the Airbus A330 fell 38,000 feet, hitting the water at 124 miles per hour and killing all 228 people on board.

Less than one year later, on the night of April 20, 2010, an explosion occurred on the Deepwater Horizon oil drilling rig 49 miles off the coast of Louisiana. Eleven people were killed and many more were injured. Oil from the well gushed uncontrollably 5,000 feet below the water’s surface. Before the well was capped on July 15, more than 4 million barrels of oil flowed into the Gulf of Mexico. The oil destroyed wildlife and wreaked havoc on the people, landscapes, and businesses of the Gulf Coast.

Two horrific disasters. Each involving a highly skilled team of people working together to perform complex job operations. Each team using sophisticated equipment and technology, with little margin for error. These events are disturbing not just because they involved so much death, horror, and destruction, but because they happened at all. Transatlantic commercial airline flying and deepwater oil drilling are not new endeavors. We might be more understanding if these tragedies had occurred a hundred years ago, in the early days of flying and offshore drilling. Instead, they happened in the 21st century, far down the experience curve, with so much proven technology, good operational management know-how, and training capability at our disposal. Why?

These incidents should stir team members and team leaders to action. Not just aviation and oil industry teams, but all teams should examine these incidents and ask, “What lessons can my team learn from them? What can we do to make sure that our team does not fall victim to similar circumstances?”

Following the summaries of the incident reports shown below, we look at considerations for assessing team readiness to help prevent similar incidents from happening.


Summary of Investigative Reports

Air France 447

Excerpts from: Interim Report No. 3 on the Accident on 1st June 2009 to the Airbus A330-203, registered F-GZCP, operated by Air France, flight AF 447 Rio de Janeiro – Paris. Bureau d’Enquêtes et d’Analyses (BEA).

    On 31 May 2009, flight AF 447 took off from Rio de Janeiro Galeão [Brazil] airport bound for Paris Charles de Gaulle [France]. At around 2:02 AM, the Captain left the cockpit. At around 2:08 AM, the crew made a course change of about ten degrees to the left, probably to avoid echoes [thunderstorms] detected by the weather radar.

    At 2:10:05 AM, likely following the obstruction of the Pitot probes in an ice crystal environment, the [air] speed indications became erroneous and the automatic systems [autopilot] disconnected. The airplane’s flight path was not brought under control by the two copilots, who were rejoined shortly after by the Captain. The airplane went into a stall that lasted until the impact with the sea at 2:14:28 AM.

    Conclusions:

    • At the time of the autopilot disconnection, the Captain was taking a rest.

    • The departure of the Captain was done without leaving any clear operational instructions, in particular on the role of each of the copilots.

    • The copilots had not received any training, at high altitude, in the “Unreliable IAS [Indicated Air Speed]” procedure and manual aircraft handling.

    • There was an inconsistency between the speeds measured, likely following the blockage of the Pitot probes in an ice crystal environment.

    • Although having identified and called out the loss of the speed indications, neither of the two copilots called the procedure “Unreliable IAS.”

    • In less than one minute after the autopilot disconnection, the airplane exited its flight envelope [exceeded its design capability] following inputs that were mainly pitch-up.

    • The Captain came back into the cockpit about 1 min 30 after the autopilot disconnection.

    • There was no explicit task-sharing between the two copilots.

    • There is no CRM [Cockpit or Crew Resource Management] training for a crew made up of two copilots in a situation with a relief Captain.

    • The airplane’s angle of attack is not directly displayed to the pilots.

    • The approach to stall was characterised by the triggering of the warning then the appearance of buffet.

    • Neither of the pilots formally identified the stall situation.

    • The engines functioned normally and always responded to the crew’s inputs.

    Recommendations:

    • Examination of their last training records and check rides made it clear that the copilots had not been trained for manual airplane handling of approach to stall and stall recovery at high altitude.

      • Review the content of check and training programmes and make mandatory…specific and regular exercises dedicated to manual aircraft handling of approach to stall and stall recovery, including at high altitude.

    • A crew consisting of two copilots does not guarantee a level of performance equivalent to a crew consisting of a Captain and a copilot when faced with a degraded situation. The absence of a hierarchy and of effective task-sharing in the cockpit strongly contributed to the low level of synergy.

      • Define additional criteria…to ensure better task-sharing in case of relief crews.

    • The crew never formally identified the stall situation. Information on angle of attack is not directly accessible to pilots. Only a direct readout of the angle of attack could enable crews to rapidly identify the aerodynamic situation of the airplane and take the actions that may be required.

      • Evaluate the relevance of requiring the presence of an angle of attack indicator directly accessible to pilots.
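A brief note on the aerodynamics behind this recommendation: by the standard lift relation, a wing stalls when its angle of attack exceeds a critical value, regardless of airspeed or attitude. In LaTeX notation (this is a textbook relation, not a formula from the BEA report):

    % rho = air density, V = true airspeed, S = wing area,
    % C_L = lift coefficient, a function of angle of attack alpha
    L = \tfrac{1}{2}\,\rho\,V^{2}\,S\,C_L(\alpha),
    \qquad \text{stall when } \alpha > \alpha_{\text{crit}}

Past the critical angle, the lift coefficient falls off sharply no matter what the speed tape shows, which is why a direct angle of attack readout identifies the aerodynamic situation even when the speed indications are unreliable.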

    Changes After the Accident:

    • Thales “BA” pitot tube probes replaced by Goodrich probes.

      • Investigations have indicated that aircraft with Thales pitot probes “appear to have a greater susceptibility” to the adverse conditions than those with Goodrich probes. Occurrences of “airspeed indication discrepancies” have been reported during flights at high altitude in inclement weather conditions.

    • Strengthening the role of copilots.

      • Ongoing implementation of a new decision-making method: the copilot expresses his or her opinion first, prior to the final decision being made by the Captain.

    • Crew training.

      • Additional training session entitled “Unreliable IAS [Indicated Air Speed].”

      • Design of a self-learning module for reinforced crews and captains.

Deepwater Horizon

Excerpts from: Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling. Report to the President, National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling.

    The immediate cause of the blowout was a failure to contain hydrocarbon pressures in the well. Three things could have contained those pressures: the cement at the bottom of the well, the mud in the well and in the riser, and the blowout preventer. But mistakes and failures to appreciate risk compromised each of those potential barriers, steadily depriving the rig crew of safeguards until the blowout was inevitable and, at the very end, uncontrollable.

    The well blew out because a number of separate risk factors, oversights, and outright mistakes combined to overwhelm the safeguards meant to prevent just such an event from happening. But most of the mistakes and oversights can be traced back to a single overarching failure of management. Better management would almost certainly have prevented the blowout by improving the ability of individuals involved to identify the risks they faced, and to properly evaluate, communicate, and address them.

    Causes and Consequences:

    • Cementing

      • Decision to employ a long string casing… increase[d] the difficulty of obtaining a reliable cement job… should have led to heightened alert for any signs of cement failure.

      • Float-valve conversion and circulating pressure. Team failed to consider whether the anomalous pressure readings may have indicated other problems or increased the risk of the cement job.

      • Cement evaluation log decision. The team erred by focusing on full returns as the sole criterion for deciding whether to run a cement evaluation log. Other evaluation tools could have provided more direct information… should have sought other indications in light of the many issues.

      • Foam cement testing. Independent testing strongly suggests the foam cement slurry was unstable… should have prompted the company to reconsider its slurry design.

      • Risk evaluation of cementing decisions and procedures. Failure to exercise special caution (and to direct contractors to be especially vigilant) before relying on the cement.

    • Negative-Pressure Test

      • It is undisputed that the negative-pressure test was conducted and interpreted improperly.

      • No procedure existed for running the test, and personnel had not been formally trained in how to do so.

      • Did not provide the Well Site Leaders or crew with procedures for the negative-pressure test.

      • No policy that required personnel to call back to shore for a second opinion about confusing data.

      • Due to poor communication, it does not appear that the men performing the test had a full appreciation of the context in which they were performing it.

    • Temporary Abandonment Procedures

      • No evidence that the team ever evaluated options or relative risks. Relying on cement integrity put a significant premium on the negative-pressure test and monitoring, which are subject to human error. Decision increased the risk of a blowout.

    • Kick Detection

      • Missed critical signs that a kick was occurring. The crew could have prevented the blowout—or reduced its impact—if they had reacted in a timely manner.

      • A number of things confounded the crew’s ability to interpret signals from the well. Instrumentation must be improved. There is no reason why automated alarms and algorithms cannot be built into the display to alert crews to anomalies. It is no longer acceptable to rely on a system that requires the right person to be looking at the right data at the right time, and then to understand its significance in spite of simultaneous activities and responsibilities.

    • Diversion and Blowout Preventer Activation

      • The crew should have diverted the flow overboard when mud started spewing; they also should have activated the blind shear ram to close in the well. Doing so would likely have given the crew more time and limited the impact of the explosion.

      • Possible explanations for why the crew did neither: they may not have recognized the severity of the situation; they did not have much time; and, perhaps most significantly, they had not been adequately trained in how to respond to such an emergency.

    • Overarching Management Failures

      • Most, if not all, of the failures can be traced back to management and communication. Better decision making processes, communication within and between contractors, and training of personnel would have prevented the incident.

      • Must have effective systems in place for integrating the various corporate cultures, internal procedures, and decision making protocols of the different contractors involved.

      • Did not adequately address risks created by late changes to well design and procedures. Changes to drilling procedures in the days before implementation are typically not subject to peer review or a Management of Change (MOC) process. Decisions appear to have been made by the team ad hoc, without formal risk analysis or review. This appears to have been a key causal factor of the blowout.

      • Information was excessively compartmentalized as a result of poor communication. BP did not share important information with its contractors, or even with members of its own team. Contractors did not share important information with each other. Individuals were often making critical decisions without a full appreciation of the context in which they were made.

      • Failed to communicate lessons learned from a similar near-miss in the North Sea four months prior. Neither a PowerPoint presentation nor an “operations advisory” ever made it to the Deepwater Horizon. Had the crew been adequately informed and trained on those lessons, events may have unfolded very differently.

      • Decision-making processes did not ensure that personnel fully considered the risks created by time- and money-saving decisions. There is nothing wrong with choosing a less costly or less time-consuming alternative, as long as it is proven to be equally safe. The problem is that there was no formal system for ensuring that alternative procedures were equally safe. None of the decisions was subject to a risk-analysis, peer-review, or management-of-change process.

    • Documentation

      • Some crew members complained that the safety manual was “unstructured,” “hard to navigate,” and “not written with the end user in mind,” and that there was “poor distinction between what is required and how this should be achieved.”

    Summary Quote:
    What we’ve seen in this cockpit voice recorder is two pilots extremely confused about what’s happening on their flight deck, what’s happening to their instrumentation.
    – Mark Rosenker, Former Chairman, National Transportation Safety Board

    Summary Quote:
    There was sort of a “chest bumping” kind of deal. Communications seemed to really break down as to who was ultimately in charge.
    – Mike Williams, Chief Electronics Technician, Deepwater Horizon





Considerations For Team Readiness


A team achieves readiness when each individual member can do his or her job — on time, safely, according to specification, within budget, and as expected. Achieving team readiness means that jobs get done in the proper way and with the desired results. Team members must have the proper know-how and ability. They must display the appropriate behavior in the course of performing their duties. What do the AF 447 and Deepwater incidents reveal about your team’s level of readiness? What can each of us learn from these tragedies that we can apply to our own teams to prevent problems and improve readiness capability?


1. Collaboration.

  • Both the AF 447 and Deepwater Horizon reports indicate that team members had problems working together to perform their duties, and that a lack of effective communication and involvement among team members contributed to the accidents.
  • There appear to have been teamwork problems both internal to some of the companies on Deepwater Horizon and between external team members (e.g., the general contractor not collaborating with subcontractors).
  • The AF 447 report cites a lack of operational instructions for the copilots and the absence of CRM training for the circumstances, and notes that the absence of a hierarchy and of effective task-sharing in the cockpit strongly contributed to the low level of synergy.
  • The Deepwater report cites “the overarching failure of management” and the need for “improving the ability of individuals involved to identify the risks they faced, and to properly evaluate, communicate, and address them.”
  • Teams should consider having Mission-Specific Crew Resource Management systems in place for all personnel on a project (general contractor and subcontractor team members alike).

             [Image: Crew Resource Management]

  • Crew Resource Management systems are intended to foster collaboration, cooperation, and synergy among team members. The goal is to obtain optimal levels of communication, problem solving, and decision making among team members so they can accomplish their mission safely, successfully, and as expected.

2. Situational Awareness.

  • Situational Awareness is a key component of Crew Resource Management. It requires having the correct understanding of what is going on in a given environment, time, and space so that the appropriate action can be taken.

             [Image: Situational Awareness]

  • Situational Awareness requires that people have access to necessary information (past and present). It requires that they interpret that information to control, influence, predict, or respond appropriately to what is happening. It also requires that they assess what is likely to happen next, especially the risks of what could go wrong.
  • It appears that neither the Air France 447 crew nor the Deepwater Horizon team was fully aware of the situation it was in (e.g., the airplane stall, cement job reliability, kick detection). As a result, team members did not take the actions needed to prevent the disasters.
  • Teams should consider their ability to maintain Situational Awareness at each step of the jobs they have been charged with executing, and to deal with the potential risks, hazards, and incidents in which they may find themselves. Part of that awareness can be automated, as the sketch below illustrates.
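The Deepwater report makes this concrete: it argues there is no reason automated alarms and algorithms cannot be built into the driller’s display to flag anomalies, rather than relying on the right person watching the right data at the right time. Below is a minimal sketch of such an alarm in Python; the sensor readings, units, and thresholds are illustrative assumptions, not values from the report.

    # Minimal sketch of an automated kick alarm (illustrative only).
    # The margins below are assumed values, not figures from the report.
    def kick_alarm(flow_in_gpm, flow_out_gpm, pit_gain_bbl,
                   flow_margin_gpm=25.0, pit_margin_bbl=10.0):
        """Return warnings if well readings look anomalous.

        During normal circulation, mud flow out should roughly equal
        flow in, and pit volume should hold steady. A sustained excess
        of flow out, or an unexplained pit gain, is a classic warning
        sign of a kick (formation fluid entering the well).
        """
        warnings = []
        if flow_out_gpm - flow_in_gpm > flow_margin_gpm:
            warnings.append("Flow out exceeds flow in: possible kick.")
        if pit_gain_bbl > pit_margin_bbl:
            warnings.append("Unexplained pit volume gain: possible kick.")
        return warnings

    # Example: flow out is running well ahead of flow in.
    print(kick_alarm(flow_in_gpm=600, flow_out_gpm=660, pit_gain_bbl=12))

The specific thresholds matter less than the design principle: the display evaluates the data continuously, so detection does not depend on one person noticing a trend amid simultaneous duties.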

3. Indicators & Instrumentation.

  • To achieve situational awareness, team members need indicators and instruments that provide them with good information so they can make the appropriate decisions and take the appropriate action at the appropriate time. They need indicators that are easy to read, understand, and interpret; that are reliable and work in all operating conditions and environments; and that are helpful in managing all likely situations.
  • Team members should have an instrument panel with the indicators they need for each job, mission, or situation they may encounter. What does your instrument panel look like?
  • For a given job or situation, ask:

    1. Do team members have the information they need?

    2. Is the information easy to access and read?

    3. Is the information easy to understand and interpret?

    4. Is the information reliable and available in all conditions (e.g., the AF 447 pitot tubes froze)?

    5. To protect against failure and ensure reliability, are redundant, backup, or alternative indicators needed?

             [Image: Angle of attack indicator and kick detector]

  • Situational Awareness on AF 447 and Deepwater Horizon was compromised by indicator and instrumentation problems. The teams lacked indicators that might have helped (e.g., an angle of attack indicator), had indicators that failed (e.g., the airspeed indicators), or had indicators that were confusing to read, understand, and act on (e.g., the kick detection data). One defense, sketched below, is to cross-check redundant indicators automatically.
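Checklist item 5 above asks about redundant, backup, or alternative indicators. On AF 447, the airspeed sources began to disagree once the pitot probes iced over. Here is a minimal sketch, again in Python, of a cross-check that flags disagreement among redundant sensors instead of silently trusting any single one; the sensor names, readings, and tolerance are illustrative assumptions.

    from statistics import median

    # Minimal sketch of a redundancy cross-check (illustrative only):
    # compare redundant sensor readings and flag disagreement rather
    # than silently trusting a single source.
    def cross_check(readings, tolerance):
        """Return (best_estimate, disagreeing_sensors).

        readings: dict mapping sensor name -> reading.
        A sensor is flagged when it deviates from the median of all
        readings by more than `tolerance`.
        """
        mid = median(readings.values())
        flagged = [name for name, value in readings.items()
                   if abs(value - mid) > tolerance]
        return mid, flagged

    # Example: one of three airspeed sources reads low, as an iced
    # pitot probe might. The median survives a single bad sensor.
    estimate, flagged = cross_check(
        {"airspeed_1": 272.0, "airspeed_2": 271.0, "airspeed_3": 93.0},  # knots
        tolerance=20.0,
    )
    print(f"estimate = {estimate} kt, disagreeing = {flagged}")

A median vote tolerates one faulty source out of three. The harder design question, which AF 447 underscores, is what the system should tell the crew when the sources disagree so badly that no single estimate deserves their trust.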

4. Procedures, Practices, & Policies.

  • There were problems with procedures on both AF 447 and Deepwater Horizon. In key situations, procedures did not exist, were not used, or were not helpful.
  • Teams should consider the procedures, best practices, lessons learned, and policies they need to be effective at each step in their job process, and for the incidents or situations they may encounter.

             [Image: Procedures, Practices, Policies]

  • Consider the comment from the Deepwater Horizon investigation that the safety manual was “unstructured,” “hard to navigate,” and “not written with the end user in mind,” and that there was “poor distinction between what is required and how this should be achieved.” As the graphic above illustrates, these kinds of issues need to be addressed when creating documentation.

5. Training.

  • In the case of AF 447, the copilots had not received any training, at high altitude, in the “Unreliable IAS [Indicated Air Speed]” procedure or in manual aircraft handling. There were also issues with task-sharing and other Crew Resource Management elements. These gaps appear to have significantly undermined the crew’s ability to recover from the stall and avoid the disaster. Self-learning training modules were among the changes made after the accident.
  • On Deepwater Horizon, team members were not trained in how to conduct the negative-pressure test and were not adequately trained in how to respond to mud spewing onto the rig floor. Training might have helped avoid the explosion or lessen its impact. There were also many deficiencies in management systems, teamwork, risk assessment, leadership, and decision making.

             [Image: Training]

  • Consider your team’s training needs, and examine the reasons training has not been done on a particular issue. Good training programs do not have to be cost-prohibitive, difficult to implement, or lacking in scope. Give your team the training it needs.







How Can TeamReadiness Help You Prevent Accidents?


  1. We document your plans, procedures, processes, policies, best practices, and training courses.

  2. TeamReadiness documentation is highly visual, easy-to-use, and easy-to-understand.

  3. TeamReadiness documentation provides powerful knowledge that team members, suppliers, and customers need.

  4. Our TRM On Demand™ software provides easy and secure online access to your documentation, training, and records.

  5. Easily create online tests and course completion certificates that reinforce learning and build esprit de corps.

  6. Easily communicate and collaborate with your team to optimize knowledge and learning.

  7. TeamReadiness gets people involved and engaged in ways that promote “buy-in,” sharing, collaboration, and cooperation.

  8. TeamReadiness documentation breaks learning down into small, quick, easily understood videos or other formats.

Let TeamReadiness help. We can assist with the entire process to quickly and cost-effectively help team members grow their capabilities and achieve team readiness!





TeamReadiness® can help!



