 

Measuring Technical Writer Productivity 

Editor’s Note: Unfortunately, the images, charts, and tables were hosted on another site, which no longer exists and over which we have no control. However, the link to download the spreadsheet (the click here link just before the PRODUCTIVITY IS RELATIVE heading) still works. The entire article, including images and tables, is available at http://intercom.stc.org/magazine/septemberoctober-2010/features-septemberoctober-2010/ but is only accessible to STC members. Additionally, you can find them in a PDF at http://intercom.stc.org/wp-content/uploads/2010/09/Measuring_Productivity.pdf which, so far, is publicly available.

by: Pam Swanwick and Juliet Wells Leckenby

Every manager struggles to balance writer workload and project capacity. A simple spreadsheet-based system can help you objectively evaluate assigned tasks, task time and complexity, special projects, and even writer experience levels to more accurately assess individual workload and capacity. The result is a simple but useful representational graph.

In addition to measuring current team capacity and productivity, this method also provides objective metrics to better estimate future project capacity and to support performance evaluations for individual writers.

NOTE: For business use, organizations may reproduce and modify the spreadsheet application we’re showcasing in this article, as long as the following information is stated in the spreadsheet properties:

Source: Pam Swanwick and Juliet Leckenby, McKesson Inc.

To view and/or download the spreadsheet, click here.

PRODUCTIVITY IS RELATIVE

Metrics are a necessary part of a manager’s job. We need to be able to identify high- and low-performing writers, realistically balance workloads, prove our productivity to upper management, and justify requests for additional headcount. As a manager of a team of writers, what metrics can you use to realistically project your team’s capacity? How can you evaluate your team’s productivity rate? How can you assess the productivity of an individual writer compared to the rest of your team?

Research indicates that no industry standards exist for technical writer productivity rates. Some practices, such as page counts, have proven counterproductive in our experience. Page counts do not take into account the varying complexity levels of different deliverables; realistically, it takes longer to produce a page of highly technical material than a page of user help.

Measuring time spent on projects is also not a good practice. Writers might put in long hours, but how do you measure how productive they are? How do you identify a writer who is handling twice the work in half the time?

In reality, all performance evaluations of technical writing are subjective. However, if your team is working on related projects with similar outputs, it is possible to develop standard metrics to evaluate writer productivity, relative to a project’s standard deliverables and to other team members.

Relative Variables within Our Team

Some teams create such diverse deliverables that no objective measurement is possible. However, our team of 15 writers produces standardized and consistent deliverables that can be reasonably compared.

  • Our team uses standard templates so that we can compare like to like.
  • Deliverables are limited and consistent across products (online help in HTML format, technical references in PDF format, quick start guides in Word format, and release notes in PDF format).

Relative Productivity Can Be Measured

Using the methods below, we can reasonably assess productivity in three areas:

  • Current writer workload relative to the team
  • Past performance of a writer
  • Future team capacity

In this article, we discuss evaluating writers’ current workloads relative to the team. However, you can adjust the spreadsheet formulas to measure past individual performance or future team capacity.

To measure productivity, we:

  1. Gather data
  2. Calculate work units
  3. Normalize the data
  4. Account for special projects
  5. Normalize the data again
  6. Account for job grade

The basic formula we use is this:

 (# topics or pages) x (complexity of deliverable) x (% of change)

 + (% time spent on special projects)

 x (job grade multiplier)

Let’s break it down.

GATHERING DATA

Our tracking spreadsheet includes the following data points, which writers enter and managers verify as necessary:

  • Number of topics (for a help project) or pages (for a Word document or PDF). Early in the document lifecycle, this number is an estimate.
  • Complexity of the deliverable. Our team assigns a numeric value from 1 to 3, although you might develop more nuanced values.
  • Percentage of new or substantially revised content. For example, we assign a value of 100% to a document that must be written from scratch; we might estimate a value of 10% for minimal updates to an existing document.
  • Special projects. The writers record the percentage of time they spend on special projects. Most of our writers volunteer for special projects in addition to their assigned deliverables (for example, updating standards and style guides).

Figure 1 shows an example of the spreadsheet in which the writers enter the data for their projects.

(This image is no longer available. See Editor’s Note at the top of the article.)

The final data point is equally important, but it is not entered by the writers in the spreadsheet:

  • Job grade (for example, entry level, mid-level, senior). Expectations are different for each level.
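As a minimal sketch, the data points above might be captured for one writer as follows. The field names and values here are our own illustration, not taken from the authors' spreadsheet:

```python
# Hypothetical data points for one writer; field names are illustrative,
# not taken from the original spreadsheet.
writer = {
    "name": "Writer X",
    "job_grade": "senior",       # entry, mid, or senior
    "special_projects_pct": 10,  # % of time spent on special projects
    "deliverables": [
        # size = # of topics or pages; complexity = 1 to 3;
        # change_pct = fraction of new or substantially revised content
        {"size": 250, "complexity": 3, "change_pct": 1.00},  # written from scratch
        {"size": 400, "complexity": 2, "change_pct": 0.25},  # partial update
    ],
}
```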

CALCULATING WORK UNITS

For each deliverable, we multiply these inputs (see Table 1):

(# of topics or pages) x (complexity of deliverable) x (% of new or changed content)

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 1. Calculating work units for each deliverable

We call the resulting number a work unit. We total each writer’s work units so that all individual deliverables are included. Now each writer has a number that reflects his or her total workload from all deliverables (see Table 2).

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 2. Sum of work units for each writer
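The work-unit calculation can be sketched in a few lines of code. The deliverable values below are made up for illustration:

```python
# Work units for one deliverable:
#   (# of topics or pages) x (complexity) x (% of new or changed content)
def work_units(d):
    return d["size"] * d["complexity"] * d["change_pct"]

# Hypothetical deliverables for one writer
deliverables = [
    {"size": 250, "complexity": 3, "change_pct": 1.00},  # 750 work units
    {"size": 400, "complexity": 2, "change_pct": 0.25},  # 200 work units
]

# The total reflects the writer's workload across all deliverables
total_work_units = sum(work_units(d) for d in deliverables)  # 950.0
```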

NORMALIZING THE DATA

The next step is to calibrate the team’s average productivity in terms of total work units. You can derive this number using several methods, such as adding the team’s total work units and dividing by the number of writers; however, we prefer a more subjective approach that takes into consideration the productivity level we want our writers to achieve as a team. For example, if we have 12 writers, we identify three or four writers who consistently meet the average level of productivity we expect from the team, and then average their work units.

Yes, this is subjective, but in this way we can adjust for current working conditions, such as an atypical sprint to meet a tight deadline or a lull in company activity.

Determining the Productivity Factor

We take the total work units for those three or four writers and determine what number we need to divide by to make their numbers close to 100; in other words, the expected productivity is 100%. If Writer X has a total workload number of 1,400, dividing by 14 gets us to 100 (1400 / 14 = 100). Thus, 14 becomes the productivity factor by which we divide all writers’ total work units.
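In code, the calibration step looks like this. The three totals below are invented, but they are chosen so the resulting factor matches the article's example of 14:

```python
# Total work units for the three or four calibration writers who
# consistently meet the expected productivity level (values made up).
calibration_totals = [1400, 1350, 1450]

average_units = sum(calibration_totals) / len(calibration_totals)  # 1400.0

# Choose the divisor that maps the calibration average to 100 (i.e., 100%)
productivity_factor = average_units / 100  # 14.0
```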

Applying the Productivity Factor

The next step is to divide each writer’s total work units by the productivity factor you have established:

(writer’s total work units) / (productivity factor)

The resulting number is each writer’s initial workload (see Table 3). A competent, mid-level writer’s workload number should be around 100%. If it is not, you should reassess your calibration numbers.

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 3. Each writer’s normalized initial workload
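Applying the factor is a single division per writer. The writer names and totals below are hypothetical; the factor of 14 follows the article's example:

```python
# Hypothetical totals; the factor of 14 matches the article's example
productivity_factor = 14
writer_totals = {"Writer X": 1400, "Writer Y": 1120, "Writer Z": 1680}

# Each writer's initial workload as a percentage of the expected 100%
initial_workload = {name: units / productivity_factor
                    for name, units in writer_totals.items()}
# Writer X: 100.0, Writer Y: 80.0, Writer Z: 120.0
```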

Accounting for Special Projects

Next, we add the percentage of time spent on special projects to the writer’s initial workload percentage (see Table 4):

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 4. Workloads adjusted for special projects
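This adjustment is a simple addition of percentages. All values below are illustrative:

```python
# Hypothetical initial workloads and special-project percentages
initial_workload = {"Writer X": 100.0, "Writer Y": 80.0}
special_projects = {"Writer X": 10, "Writer Y": 25}  # % of time

adjusted_workload = {name: initial_workload[name] + special_projects[name]
                     for name in initial_workload}
# Writer X: 110.0, Writer Y: 105.0
```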

Normalizing the Data Again

At this point, we normalize the numbers again to bring the team average back to near 100. In our case, we multiply all numbers by 0.8 (see Table 5).

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 5. Workloads normalized again
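The article applies a fixed factor of 0.8; one way to derive such a factor is to divide 100 by the new team average, which for the made-up numbers below comes out to exactly 0.8:

```python
# Hypothetical adjusted workloads; deriving the factor as
# 100 / (team average) yields the article's 0.8 for these numbers
adjusted = {"Writer X": 110.0, "Writer Y": 105.0, "Writer Z": 160.0}

team_average = sum(adjusted.values()) / len(adjusted)  # 125.0
factor = 100 / team_average                            # 0.8
renormalized = {name: v * factor for name, v in adjusted.items()}
# Writer X: 88.0, Writer Y: 84.0, Writer Z: 128.0
```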

Accounting for Job Grade

Job grade is the final metric we factor in. We assign a multiplier value to each job grade to quantify the assumption that senior writers are expected to be more productive and maintain a heavier workload than junior writers. For a junior writer, we set the multiplier at 1.0; the mid-level writer multiplier is 0.9, and the senior writer multiplier is 0.8. The final calculation is (see Table 6):

(total workload) x (job grade multiplier)

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 6. Final workload, adjusted for job grade
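The grade multipliers are the ones stated above; the writer names and workload numbers are hypothetical:

```python
# Multipliers from the article: seniors are expected to carry more,
# so their workload number is scaled down
GRADE_MULTIPLIER = {"junior": 1.0, "mid": 0.9, "senior": 0.8}

# Hypothetical (grade, workload) pairs
writers = {"Writer X": ("senior", 128.0), "Writer Y": ("junior", 88.0)}

final_workload = {name: round(load * GRADE_MULTIPLIER[grade], 1)
                  for name, (grade, load) in writers.items()}
# Writer X: 102.4, Writer Y: 88.0
```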

GRAPHING THE RESULTS

We plot the resulting value for each writer on a bar chart (see Table 7). We find an acceptable range to be within 90-110%.

(This image is no longer available. See Editor’s Note at the top of the article.)
Table 7. Graph of productivity

BALANCING WORKLOAD

What do you do about writers who are significantly above or below 100%? There are three factors that can be adjusted to change the percentage:

  • Shift deliverables from an overloaded writer to an underloaded one
  • Increase or decrease a writer’s participation in special projects
  • Promote an overloaded junior or mid-level writer

The linked spreadsheet is very useful as a simulation tool in this situation. You can see what happens if you move a project to a different writer, decrease someone’s participation in a special project, or promote a junior writer.
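A what-if simulation of the first adjustment can be sketched as follows. All numbers, including the factor of 14 and the 280-work-unit deliverable, are made up:

```python
# What-if: move a 280-work-unit deliverable from an overloaded writer
# to an underloaded one, then recompute workload percentages
productivity_factor = 14
totals = {"Overloaded": 1680, "Underloaded": 1120}

moved_units = 280
totals["Overloaded"] -= moved_units
totals["Underloaded"] += moved_units

workloads = {name: units / productivity_factor
             for name, units in totals.items()}
# both writers now land at 100.0
```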

HOW DO WRITERS REACT?

We have experienced a range of reactions from writers when presented with their metrics. (We only show a writer his or her own percentage as compared to 100% and to a team average, not as compared to other writers.) Writers with low numbers have occasionally expressed appreciation for the tangible nature of the metrics. Writers who disagree with the numbers or who are dissatisfied with the process, however, find it difficult to argue with the quantifiable nature of the productivity metrics. In several cases, consistently poor performance numbers have prompted writers to leave the company on their own, sparing us the time, expense, and legal issues associated with terminating an underperforming employee.

CAVEAT EMPTOR

We have used variations of the above metrics and calculations for the past several years to accurately and consistently estimate (a) past performance of a writer relative to his or her peers, (b) current workload of each writer relative to the team, and (c) future team capacity. However, we cannot overemphasize that what we have described is ultimately a subjective process. It must be tailored by each documentation manager to suit the needs and conditions of the specific team.

By objectively measuring what we can, and consistently comparing what can be compared but not easily measured, we have made this productivity-measurement system work for us. We hope you can make it work for you, too.

About the Authors

Pam Swanwick (Pam.Swanwick@McKesson.com) has worked as a technical writer and manager for over 20 years, primarily in technology industries. For the past dozen or so years, she has focused on medical software at McKesson Inc. She managed one of McKesson’s product documentation teams for five years. Juliet Wells Leckenby (Juliet.Leckenby@McKesson.com) has worked as a technical writer for almost 20 years, the last five at McKesson. She served as a team lead under Pam and is now the manager of the documentation team.



Comments



  3. By Garth Gerstein on 2nd March 2016 at 9:25 am

    I don’t understand how you get to the normalized workload units based on your average writer. I get that you normalize the workload to 100% for your average writer, and that gives you a factor that you can use as a divisor for all writers, but you started by just adding up all the workloads for current deliverables right? How do you know the total workload units for that “average writer” are reasonable in the first place?

  4. By WAI_editor on 7th March 2016 at 12:24 pm

    Garth – Thanks for your comment. As the authors indicate, this is partly a subjective process and is not etched in stone. However, “For example, if we have 12 writers, we identify three or four writers who consistently meet the average level of productivity we expect from the team, and then average their work units.” So a lot has to do with the expected level of average productivity, which is certainly subjective depending on the manager’s expectations.

