Atlassian

Postmortem Template

In today's fast-paced digital world, businesses need to be prepared for unexpected incidents that could disrupt their operations. The Incident Postmortem Template is an essential tool that helps organizations streamline their incident response process and learn from past mistakes. With a focus on clear documentation and detailed analysis, this template allows teams to collect consistent information during each postmortem review, ensuring that valuable lessons are learned and applied to future incidents.

This template covers all crucial aspects of an incident, including the summary, leadup, fault, impact, detection, response, and recovery. By providing a detailed timeline and utilizing the Five Whys technique for root cause identification, teams can gain a deeper understanding of the incident and its underlying causes. This approach enables organizations to learn from past experiences and implement corrective actions to prevent future occurrences.

Atlassian's Incident Postmortem Template is an invaluable resource for any team looking to enhance their incident management process. By adopting this template, teams can ensure clear documentation, effective communication, and continuous improvement, ultimately leading to a more resilient and reliable infrastructure.

Incident Postmortem Template

Clear documentation is key to an effective incident postmortem process. Many teams use a comprehensive template to collect consistent details during each postmortem review. 

Incident summary

Write a summary of the incident in a few sentences. Include what happened, why, the severity of the incident and how long the impact lasted.

Example

Between the hours of __ and __ on __, users encountered __.

The event was triggered by a __ at __.

The __ contained __.

A bug in this code caused __.

The event was detected by __. The team started working on the event by __.

This incident affected __ of users.

There was further impact, as noted by the __ raised in relation to this incident.

Leadup

Describe the sequence of events that led to the incident, for example, previous changes that introduced bugs that had not yet been detected.

Example

At __ on __ (__), a change was introduced to __ in order to __.

This change resulted in __.

Fault

Describe how the change that was implemented didn't work as expected. If available, attach screenshots of relevant data visualizations that illustrate the fault.

Example

__ responses were sent in error to __ of requests. This went on for __.

Impact

Describe how the incident impacted internal and external users during the incident. Include how many support cases were raised.

Example

For __ between __ on __, __ of our users experienced this incident.

This incident affected __ customers (__% OF USERS), who experienced __.

__ were submitted.

Detection

When did the team detect the incident? How did they know it was happening? How could we improve time-to-detection? Consider: How would we have cut that time by half?

Example

This incident was detected when the __ was triggered and __ were paged.

Next, __ was paged, because __ didn't own the service writing to the disk, delaying the response by __.

__ will be set up by __ so that __.
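
As an illustration of the kind of monitor and page this section refers to, the sketch below checks disk usage against a threshold and pages when it is crossed. It is a minimal, hypothetical Python example; the threshold and the page_on_call hook are placeholders, not part of the template.

  import shutil

  def page_on_call(message):
      # Stand-in for a real paging integration; placeholder only.
      print("PAGE:", message)

  def check_disk_and_page(path="/", threshold_percent=90.0):
      # Page when disk usage on the given path crosses the threshold.
      usage = shutil.disk_usage(path)
      percent_used = usage.used / usage.total * 100
      if percent_used >= threshold_percent:
          page_on_call(f"Disk usage at {percent_used:.1f}% on {path}")

  if __name__ == "__main__":
      check_disk_and_page()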

Response

Who responded to the incident? When did they respond, and what did they do? Note any delays or obstacles to responding.

Example

After receiving a page at __, __ came online at __ in __.

This engineer did not have a background in the __, so a second alert was sent at __ to bring __ into the __, who came into the room at __.

Recovery

Describe how the service was restored and how the incident was deemed over. Detail how you knew the service had been successfully restored and what steps you needed to take to recover.

Depending on the scenario, consider these questions: How could you improve time to mitigation? How could you have cut that time by half?

Example

We used a three-pronged approach to the recovery of the system: 

  1. Increasing the size of the BuildEng EC2 ASG to increase the number of nodes available to support the workload and reduce the likelihood of scheduling on oversubscribed nodes (see the sketch after this list)
  2. Disabling the Escalator autoscaler to prevent the cluster from aggressively scaling down
  3. Reverting the Build Engineering scheduler to the previous version
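
As an illustration only, the first recovery step above might look something like the boto3 sketch below; the Auto Scaling group name and sizes are hypothetical placeholders, not values from the incident.

  import boto3

  autoscaling = boto3.client("autoscaling")

  # Raise capacity so new builds stop landing on oversubscribed nodes.
  autoscaling.update_auto_scaling_group(
      AutoScalingGroupName="buildeng-workers",  # placeholder group name
      MinSize=10,
      DesiredCapacity=40,
      MaxSize=60,
  )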

Timeline

Detail the incident timeline. We recommend using UTC to standardize for timezones.
Include any notable lead-up events, the start of activity, the first known impact, and escalations. Note any decisions or changes made, when the incident ended, and any notable post-impact events.
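
If timestamps are collected in local time, a short script can normalize them to UTC before they go into the timeline. The Python sketch below is a minimal illustration; the timezone, dates, and entries are placeholders.

  from datetime import datetime
  from zoneinfo import ZoneInfo

  # Illustrative local-time entries, e.g. copied from a chat log.
  local_entries = [
      ("2024-05-01 07:48", "Control plane upgrade finished"),
      ("2024-05-01 10:20", "Build Engineering reports a problem"),
  ]

  for stamp, activity in local_entries:
      local = datetime.strptime(stamp, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo("America/New_York"))
      utc = local.astimezone(ZoneInfo("UTC"))
      print(f"{utc:%H:%M} UTC - {activity}")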

Example

All times are UTC.

  1. 11:48 — K8S 1.9 upgrade of control plane is finished 
  2. 12:46 — Upgrade to v1.9 completed, including the cluster-autoscaler and the BuildEng scheduler instance
  3. 14:20 — Build Engineering reports a problem to KITT Disturbed
  4. 14:27 — KITT Disturbed starts investigating failures of a specific EC2 instance (ip-203-153-8-204) 
  5. 14:42 — KITT Disturbed cordons the node 
  6. 14:49 — BuildEng reports the problem as affecting more than just one node. 86 instances of the problem show the failures are more systemic
  7. 15:00 — KITT Disturbed suggests switching to the standard scheduler 
  8. 15:34 — BuildEng reports 200 pods failed 
  9. 16:00 — BuildEng kills all failed builds with OutOfCpu reports 
  10. 16:13 — BuildEng reports the failures are consistently recurring with new builds and are not just transient.
  11. 16:30 — KITT recognizes the failures as an incident and begins running it as one.
  12. 16:36 — KITT disables the Escalator autoscaler to stop it from removing compute, in order to alleviate the problem.
  13. 16:40 — KITT confirms the ASG is stable, cluster load is normal, and customer impact is resolved.

Template

  1. XX:XX UTC — INCIDENT ACTIVITY; ACTION TAKEN
  2. XX:XX UTC — INCIDENT ACTIVITY; ACTION TAKEN
  3. XX:XX UTC — INCIDENT ACTIVITY; ACTION TAKEN

Root cause identification: The Five Whys

The Five Whys is a root cause identification technique. Here’s how you can use it:
  1. Begin with a description of the impact and ask why it occurred.
  2. Note the impact that it had.  
  3. Ask why this happened, and why it had the resulting impact. 
  4. Then, continue asking “why” until you arrive at a root cause.
List the "whys" in your postmortem documentation.

Example

  1. The application had an outage because the database was locked.
  2. The database locked because there were too many writes to the database.
  3. Because we pushed a change to the service and didn’t expect the elevated writes.
  4. Because we don't have a development process established for load testing changes.
  5. Because we never felt load testing was necessary until we reached this level of scale.

Root cause

Note the final root cause of the incident, the thing identified that needs to change in order to prevent this class of incident from happening again.

Example

A bug in connection pool handling led to leaked connections under failure conditions, combined with lack of visibility into connection state.

Backlog check

Review your engineering backlog to find out whether there was any unplanned work there that could have prevented this incident, or at least reduced its impact.
A clear-eyed assessment of the backlog can shed light on past decisions around priority and risk.

Example

There were no specific items in the backlog that could have improved this service. There was a note about improvements to flow typing, but these were ongoing tasks with workflows already in place.

Tickets have been submitted for improving integration tests, but so far they haven't been successful.

Recurrence

Now that you know the root cause, can you look back and see any other incidents that could have the same root cause? If yes, note what mitigation was attempted in those incidents and ask why this incident occurred again.

Example

This same root cause resulted in incidents HOT-13432, HOT-14932 and HOT-19452.

Lessons learned

Discuss what went well in the incident response, what didn't, and where there are opportunities for improvement.

Example

  1. Need a unit test to verify the rate-limiter for work has been properly maintained (see the sketch after this list)
  2. Bulk operation workloads that are atypical of normal operation should be reviewed
  3. Bulk ops should start slowly and be monitored, increasing only when service metrics appear nominal
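
As an illustration of the unit test called for in item 1, here is a minimal Python sketch; RateLimiter is a hypothetical stand-in defined inline so the test is self-contained, not the real service code.

  import unittest

  class RateLimiter:
      # Minimal counter-based stand-in so the test runs on its own.
      def __init__(self, max_per_window):
          self.max_per_window = max_per_window
          self.count = 0

      def allow(self):
          if self.count < self.max_per_window:
              self.count += 1
              return True
          return False

  class RateLimiterTest(unittest.TestCase):
      def test_rejects_requests_over_the_limit(self):
          limiter = RateLimiter(max_per_window=3)
          self.assertTrue(all(limiter.allow() for _ in range(3)))
          self.assertFalse(limiter.allow())  # the fourth request in the window is rejected

  if __name__ == "__main__":
      unittest.main()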

Corrective actions

Describe the corrective actions ordered to prevent this class of incident in the future. Note who is responsible, when they have to complete the work, and where that work is being tracked.

Example

  1. Manual auto-scaling rate limit put in place temporarily to limit failures
  2. Unit test and re-introduction of job rate limiting
  3. Introduction of a secondary mechanism to collect distributed rate information across the cluster to guide scaling effects

Related examples in Postmortems
Amazon
Chiller Correction of Errors
Google
Postmortem Example