How 360 Assessments Can Help (Not Hurt) Performance

February 28, 2016 by Robert Hellmann • On-the-job Success, Org. Effectiveness

An article in the New York Times slams 360 reviews for being “cruel” and counterproductive. The author describes 360s as too often serving as conduits for mean-spirited attacks that lack substance and offer no constructive criticism. I actually agree with the article’s point of view for the types of 360s it describes. Yet I find 360s to be incredibly helpful to clients when they are conducted the right way.
The 360s described in the article are of the automated, unmediated type: the employee simply sees raw, unfiltered comments, often without the necessary context, and usually the employee’s manager or HR automatically sees this same output. There is no objective, experienced “facilitator” standing between the reviewers and the recipient who can ensure a productive outcome. This type of 360 can cause problems, for all the reasons the article mentions, especially if the organizational culture is less than optimal.
The 360 reviews I use with clients in my executive coaching practice are of a different sort; they provide essential “data” to clients so that they can understand their strengths and weaknesses and improve as employees, managers, and leaders, no matter what their level. These reviews are far more effective than the automated, unfiltered ones described in the article, for four reasons:
1. An experienced coach asks the questions: I conduct structured interviews with each reviewer, using questions that probe for strengths, areas in need of development, and advice for the client. Because these interviews are live, I can delve deeper into the issues being raised as needed, a key difference from the 360s described in the article, which rely on written responses alone.
2. An experienced coach interprets the results: Any decent business analyst would tell you that just sharing data is not enough. Interpreting the data objectively, and making recommendations, is key. The same holds true for 360s.
When compiling the results of the interviews, I don’t just present the data, i.e. “here’s what they said, good luck,” as some automated 360s do. Nor are the results simply relayed by untrained managers, who may be too invested in a certain point of view to see the implications clearly (and the employee may not truly accept a result from a manager they don’t see as objective). Instead, my interpretation and recommendations come from an experienced, objective outsider’s point of view.
3. Specific results are not shared with the manager or HR: The 360 results described in the article are shared directly with management, a crucial difference. Instead, I work with the employee to incorporate the results into a development plan, and it is the plan that ultimately receives the manager’s input and buy-in.
The development plan, not the 360 itself, should really be the bottom line from the manager’s point of view, as the 360 is just one of several assessment tools feeding into the plan (for example, I’ll often combine the 360 with a DiSC assessment to help identify the strengths and risks of a client’s communication style). The power of the 360 lies in its ability to finally open an employee’s eyes to issues that need to be addressed. When used for this purpose, the specific interviewee comments don’t need to be shared; in fact, not sharing them reduces the likelihood of the abuse mentioned in the article. Instead, the results serve as input to the development plan.
Importantly, while the client does see actual comments from interviewees, the quotes are randomized and combined in a way that maintains each reviewer’s anonymity (a simple sketch of this randomize-and-combine step follows the list below).
4. Reviewers are selected by the employee: The selection is limited (six people is typical), and the reviewers represent key stakeholders in the employee’s work and success, usually across levels. The list is guided in part by my advice and approved by the employee’s manager, usually with few or no revisions. Having the client select the reviewers helps ensure the client’s buy-in on the results, and obvious outliers that could bias the analysis are avoided. Too often, automated 360s put little or no effort into assembling a reviewer sample that would give an accurate picture of performance.
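For the technically curious, here is a minimal sketch of the randomize-and-combine idea from point 3, written in Python. It is a simplified illustration of the concept only, not my actual process or tooling; the data structure and function name are hypothetical.

import random

def anonymize_feedback(interviews):
    """Pool quotes by question and shuffle them, so no comment can be
    traced back to a particular reviewer.

    `interviews` maps each reviewer's name to a dict of
    question -> list of quotes (a hypothetical structure).
    """
    pooled = {}
    for responses in interviews.values():  # reviewer names are dropped here
        for question, quotes in responses.items():
            pooled.setdefault(question, []).extend(quotes)
    for quotes in pooled.values():
        random.shuffle(quotes)  # remove any ordering that hints at the source
    return pooled

# Example with made-up data: the client sees only the shuffled, combined
# quotes per question, never who said what.
interviews = {
    "Reviewer A": {"Strengths": ["Keeps the team in the loop."]},
    "Reviewer B": {"Strengths": ["Communicates the department vision well."]},
}
print(anonymize_feedback(interviews))

The point of the sketch is simply that attribution is discarded before the client ever sees the comments, which is what preserves reviewer anonymity.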
Here’s a quick client example. I recently worked with Julie as an executive coaching client. When we analyzed her 360 results, she appreciated hearing that her employees valued the way she communicated with them. In particular, her team liked how she kept them in the loop and invested in the vision and strategy of the department and larger organization. This feedback helped to ensure she would keep using this communication approach with her team.
Yet she also received a consistent message around “micromanagement.” The employees interviewed felt, across the board, that she was spending too much time telling them exactly how to do their jobs, or getting too involved in things they could handle themselves. This message echoed similar feedback she had received from her boss: that she wasn’t spending enough time on the cross-functional strategic issues that only she (not her staff) could deal with.
It was a real eye-opener for her to see the micromanagement issue so clearly in the 360 results, with me helping her interpret the comments. Together we created a development plan with “delegation” and “situational leadership” as key components. The plan was approved by her manager, and continued coaching reinforced how to make it a reality. The result: by all accounts (via a follow-up review months later), she had become a stronger manager, and the micromanagement issues became a thing of the past.
This approach to 360s works so well that it has also been adopted by the Five O’Clock Club, an organization with which I’m closely affiliated, as a key tool in its executive coaching program. In summary, 360s need to be conducted thoughtfully and with a representative set of reviewers, with both objectivity and analysis as major features. Having an experienced coach facilitate the process can be the key to successful implementation.