Most often, information solicited in a 360-degree feedback process includes feedback from an employee's subordinates, peers (colleagues), and supervisor(s), as well as a self-evaluation by the employee. When relevant, it can also include input from external sources who interact with the employee, such as customers, suppliers, or other interested stakeholders. 360-degree feedback, also known as multi-rater feedback, multi-source feedback, or multi-source assessment, is so named because it solicits feedback on an employee's behavior from a variety of points of view (subordinate, lateral, and supervisory). It may therefore be contrasted with "downward feedback" (traditional feedback on work behavior and performance delivered to subordinates by supervisory or management employees only; see traditional performance appraisal) and with "upward feedback," delivered to supervisory or management employees by subordinates only.
Organizations have most commonly used 360-degree feedback for developmental purposes, providing it to employees to assist them in developing work skills and behaviors. However, organizations are increasingly using 360-degree feedback in performance evaluations and employment decisions (e.g., pay and promotions). When 360-degree feedback is used for performance evaluation, it is sometimes called a "360-degree review."
There is considerable debate as to whether 360-degree feedback should be used exclusively for development or for evaluation as well. The debate stems primarily from feedback providers' subjectivity and motivations, variation among raters, and questions about whether raters can fairly evaluate the attainment of work and organizational objectives. While these issues exist when 360-degree feedback is used for development, they are more prominent when employers use it for performance evaluation, because they can unfairly influence employment decisions and even lead to legal liability.
History
During World War II, the German military began gathering feedback from multiple sources in order to evaluate performance. Others also explored the use of multi-rater feedback during this period through the concept of T-groups.
One of the earliest recorded uses of surveys to gather information about employees occurred in the 1950s at Esso Research and Engineering Company. From there, the idea of 360-degree feedback gained momentum, and by the 1990s most human resources and organizational development professionals understood the concept. The problem was that collecting and collating the feedback demanded a paper-based effort involving either complex manual calculations or lengthy delays. The former led to despair on the part of practitioners; the latter to a gradual erosion of commitment by recipients.
However, with the rise of the Internet and the ability to conduct evaluations through online surveys, multi-rater feedback steadily grew in popularity. The outsourcing of human resources functions has also created a strong market for 360-degree feedback products from consultants, leading to a proliferation of 360-degree feedback tools on the market.
Today, studies suggest that over one-third of U.S. companies use some type of multi-source feedback; other estimates put the figure closer to 90% of all Fortune 500 firms. The practice has been encouraged in recent years as Internet-based services have become standard in corporate development, offering a growing menu of features (e.g., multilingual support, comparative reporting, and aggregate reporting). However, questions remain about such systems' validity and reliability, particularly when they are used in performance appraisals.
Issues
Many 360-degree feedback tools are not customized to the needs of the organizations in which they are used, and 360-degree feedback is not equally useful in all types of organizations or with all types of jobs. Additionally, the use of 360-degree feedback tools for appraisal purposes has increasingly come under fire because performance criteria may not be valid and job-based, employees may not be adequately trained to evaluate a co-worker's performance, and feedback providers can manipulate these systems. Employee manipulation of feedback ratings has been reported in companies that have used 360-degree feedback for performance evaluation, including GE (Welch 2001), IBM (Linman 2011), and Amazon (Kantor and Streitfeld 2015).
The U.S. military has criticized its own use of 360-degree feedback programs in employment decisions because of problems with validity and reliability. Other branches of the U.S. government have questioned 360-degree feedback reviews as well. Still, these organizations continue to use multi-rater feedback in their development processes.
Accuracy
A study on patterns of rater accuracy shows that the length of time a rater has known the individual being evaluated has the most significant effect on the accuracy of a 360-degree review. Raters who had known the individual for one to three years were the most accurate, followed by those who had known the individual for less than one year, then those who had known the individual for three to five years; those who had known the individual for more than five years were the least accurate. The study concludes that the most accurate ratings come from those who have known the individual being reviewed long enough to get past first impressions, but not so long that they begin to generalize favorably.
It has been suggested that multi-rater assessments often generate conflicting opinions and that there may be no way to determine whose feedback is accurate. Studies have also indicated that self-ratings are generally significantly higher than the ratings given by others. The motivations and biases of feedback providers must therefore be taken into account.
Results
Several studies indicate that the use of 360-degree feedback helps to improve employee performance because it helps those evaluated see different perspectives on their performance. In one five-year study, no improvement in overall rater scores was found between the first and second years, but higher scores were noted between the second and third years and between the third and fourth years. Reilly et al. (1996) found that performance increased between the first and second administrations and that the improvement was sustained two years later. Additional studies show that 360-degree feedback may be predictive of future performance.
Some authors maintain, however, that there are too many lurking variables related to 360-degree evaluations to generalize reliably about their effectiveness. Bracken et al. (2001b) and Bracken and Timmreck (2001) focus on process features that are also likely to have major effects on creating behavior change. Greguras and Robie (1998) tracked how the number of raters used in each category (direct report, peer, manager) affects the reliability of the feedback. Their research showed that direct reports are the least reliable rater category and that more of them must therefore participate to produce a reliable result, as illustrated below. Multiple studies have demonstrated that the response scale can have a major effect on the results and that some response scales are better than others. Goldsmith and Underhill (2001) report the powerful influence of the evaluated individual following up with raters to discuss their results, which cannot be done when feedback is anonymous. Other potentially powerful factors affecting behavior change include how raters are selected, manager approval, instrument quality, rater training and orientation, participant training, supervisor training, coaching, integration with HR systems, and accountability.
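The relationship between the number of raters and the reliability of their averaged ratings can be illustrated with the Spearman–Brown prediction formula from classical test theory; the figures below are a hypothetical illustration of the general principle, not data from the studies cited above. If ratings from a single rater have reliability $r$, the reliability of the average of $k$ comparable raters is approximately

$$\rho_k = \frac{k\,r}{1 + (k-1)\,r}.$$

For example, if a single direct report's ratings have a reliability of $r = 0.30$, averaging the ratings of $k = 5$ direct reports yields a composite reliability of about $\frac{5 \times 0.30}{1 + 4 \times 0.30} \approx 0.68$, which is why less reliable rater categories require more participants to produce dependable feedback.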
Some researchers claim that the use of multi-rater assessment does not improve company performance. One 2001 study found that 360-degree feedback was associated with a 10.6 percent decrease in market value and concluded that "there is no data showing that [360-degree feedback] actually improves productivity, increases retention, decreases grievances, or is superior to forced ranking and standard performance appraisal systems."
One group of studies proposed four paradoxes that explain why 360-degree evaluations do not elicit accurate data: (1) the Paradox of Roles, in which an evaluator is conflicted by being both peer and judge; (2) the Paradox of Group Performance, which observes that the vast majority of work in a corporate setting is done in groups, not individually; (3) the Measurement Paradox, which shows that qualitative, in-person techniques are much more effective than ratings alone in facilitating change; and (4) the Paradox of Rewards, which shows that individuals evaluating their peers care more about the rewards associated with finishing the task than about the actual content of the evaluation.
Additional studies found no correlation between an employee's multi-rater assessment scores and his or her top-down performance appraisal scores (those provided by the person's supervisor). They advise that although multi-rater feedback can be used effectively for appraisal, care needs to be taken in its implementation or results will be compromised. This research suggests that 360-degree feedback and performance appraisals measure different outcomes, and that traditional performance appraisals as well as 360-degree feedback should therefore be used in evaluating overall performance.