> On LO19027 you wrote:
> > The competent administration of any 360 requires, among other things:
> >1. Sensitivity to respondent time and energy
> >2. Control questions that detect erratic responding
> >3. Methods of detecting co-variance with the group averages
> >4. Detection of positive and negative bias effects
> >5. Progressive implementation for large groups or teams
> >6. Follow-up assessments detecting behavioral movement and
> > intra-respondent reliability.
>
> -- Would you mind making each of these points more explicit? They can be a
> great asset for HR professionals training on this subject.
>
> > Bottom line: Do not administer any 360 tools carelessly or
> > unprofessionally, and do not be in denial regarding the effects of
> > propagation.
>
> -- Same request as above...
>
> Miguel A. Maldonado
> Multimedia Manager
> Fairchild Semiconductor
> Sunnyvale, California
Yes, I would be happy to expound:
1. Sensitivity to respondent time and energy
If you use a 360 with even a small number of participants and respondents,
the number of surveys that must be completed grows large very quickly. To
make sure that you receive quality information about each participant, it is
important that respondents not be overwhelmed with response forms. They will
tend to "get through" the task and the quality will go down. This is
particularly true if the item content is complex, requiring a more complex
evaluation of participant behavior.
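To make the arithmetic concrete, here is a small Python sketch (the team
sizes are illustrative) of how the survey load grows when everybody
evaluates everybody, self-assessment included:

    # Survey load when everybody evaluates everybody (self included).
    for team_size in (5, 10, 15, 20):
        forms_per_respondent = team_size        # self + (team_size - 1) peers
        total_forms = team_size * forms_per_respondent
        print(f"team of {team_size:2d}: {forms_per_respondent} forms each, "
              f"{total_forms} forms in total")

A team of 20 already generates 400 forms, which is why respondent fatigue
sets in so quickly.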
2. Control questions that detect erratic responding
Surveys need control questions to determine the degree to which the
respondents read the items, could read the items, and did not respond randomly.
For example, to the item "Thinks about work while at work," the respondent
should not respond with "Never, Almost Never" or "Seldom." The question
appears "stupid" only to a respondent who has actually read it. When
respondents get too many response forms, some will
randomly respond because they know the evaluation process is anonymous and the
lack of sincerity will not be detected. Garbage in, garbage out.
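As a minimal sketch of such a screen, assuming a 5-point scale (1 = Never
... 5 = Always) and a control item that any attentive respondent should
endorse; the cutoff and data are illustrative, not from any particular
instrument:

    CONTROL_ITEM = "thinks_about_work_while_at_work"
    CUTOFF = 3   # 1-3 (Never / Almost Never / Seldom) is implausible here

    def flag_erratic(responses):
        """Return IDs of respondents whose control answer is implausible."""
        return [rid for rid, answers in responses.items()
                if answers[CONTROL_ITEM] <= CUTOFF]

    responses = {
        "R1": {CONTROL_ITEM: 5},   # plausible
        "R2": {CONTROL_ITEM: 1},   # likely random or inattentive
    }
    print(flag_erratic(responses))   # ['R2']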
3. Methods of detecting co-variance with the group averages
If we make the case that the "truth" of a participant's behavior will be
found in the shared perception of the group, then that perception should have
a central tendency and some degree of variation. For a group of sincere
respondents reviewing behaviorally based survey items, this variance should be
small. One way to account for this is to correlate each respondent with the
group average and discard any respondent that falls below a standard, such as
a correlation lower than +.50. This is similar to judging a diving contest
where the high and low marks are discarded. In this case, all the respondent
surveys can be included as long as they are relatively close to the shared
group perception. Of course, this assumes that respondents are familiar with
the participant and the survey items are behaviorally based, reliable and
assess observable competencies.
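A minimal Python sketch of this screen (3.10+ for statistics.correlation;
the ratings are invented, and the +.50 cutoff follows the text):

    from statistics import correlation, mean

    ratings = {                    # respondent -> ratings of one participant
        "R1": [4, 5, 3, 4, 5],
        "R2": [4, 4, 3, 5, 5],
        "R3": [4, 5, 3, 4, 4],
        "R4": [4, 5, 3, 5, 4],
        "R5": [2, 1, 5, 1, 2],     # out of step with the shared perception
    }

    n_items = len(next(iter(ratings.values())))
    group_mean = [mean(r[i] for r in ratings.values()) for i in range(n_items)]

    kept = {rid: r for rid, r in ratings.items()
            if correlation(r, group_mean) >= 0.50}
    print(sorted(kept))            # R5 falls below the cutoff and is dropped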
4. Detection of positive and negative bias effects
There are two common ways that respondents attempt to bias the outcome of
a 360: 1) by making inappropriately positive evaluations about someone they
like or 2) by making inappropriately negative evaluations about someone they
do not like. Items in the survey can be constructed to detect these bias
effects. Combined with the methods described in point #3 above, it is not
difficult to eliminate insincere respondents from the final analysis.
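The item-level construction is instrument-specific, so the following is
only a crude, hypothetical proxy: flag respondents whose overall average
for a participant sits far above or below everyone else's (the 1.0-point
threshold is an illustrative assumption):

    from statistics import mean

    scores = {            # respondent -> mean rating given to one participant
        "R1": 3.9, "R2": 4.1, "R3": 3.8, "R4": 4.0,
        "R5": 5.0,        # suspiciously rosy
        "R6": 1.6,        # suspiciously harsh
    }

    overall = mean(scores.values())
    flags = {rid: ("positive bias?" if s > overall else "negative bias?")
             for rid, s in scores.items() if abs(s - overall) > 1.0}
    print(flags)          # {'R5': 'positive bias?', 'R6': 'negative bias?'}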
5. Progressive implementation for large groups or teams
Due to the effects of propagation (everybody evaluating everybody else at
one time), using a more planned approach over time will increase the quality
of responding. For example, in a team of 10 where a self-assessment is
included, each participant would need to complete 10 survey forms: one for
self and one for each of the 9 other team members. If a person is on two
different teams of 10, then 19 forms would need to be completed. This is too
many in a short period of time. It would be better to progressively implement
the process so that no respondent would have more than 3 or 4
surveys to complete in a given week. Be mindful that these folks also have
jobs to do.
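A minimal scheduling sketch along these lines (the 4-per-week cap follows
the guideline above; the form names are placeholders):

    MAX_PER_WEEK = 4

    def schedule(forms, cap=MAX_PER_WEEK):
        """Chunk one respondent's forms into weekly batches."""
        return [forms[i:i + cap] for i in range(0, len(forms), cap)]

    # A respondent on two teams of 10: 19 forms in total.
    forms = [f"survey_{k:02d}" for k in range(1, 20)]
    for week, batch in enumerate(schedule(forms), start=1):
        print(f"week {week}: {len(batch)} forms -> {batch}")
    # 19 forms at 4 per week spread over 5 weeks instead of one sitting.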
6. Follow-up assessments detecting behavioral movement and intra-respondent
reliability.
Behavioral movement over time is often more important than the current
evaluation of someone's behavior. For coaching purposes, it is useful to
review expected behavioral movement rather than focus on current competencies
or deficiencies. Using a repeated measures design, we are also able to detect
intra-respondent reliability (i.e., how did a person evaluate himself or
herself in the repeated measure). If there is a wide variation in the two
measures for any participant, one would wonder about the degree of sincerity
exhibited in the "self-assessment" by the participant and this can become part
of the coaching process itself.
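A minimal sketch of the intra-respondent reliability check (the thresholds
and data are illustrative assumptions; Python 3.10+ for
statistics.correlation):

    from statistics import correlation

    time1 = [4, 4, 3, 5, 4, 4]    # self-ratings, first administration
    time2 = [2, 5, 1, 5, 2, 5]    # same items, repeated measure

    r = correlation(time1, time2)
    drift = sum(abs(a - b) for a, b in zip(time1, time2)) / len(time1)

    print(f"test-retest r = {r:+.2f}, mean shift = {drift:.2f} points")
    if r < 0.50 or drift > 1.0:
        print("wide variation: raise sincerity in the coaching conversation")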
If anyone is interested in reviewing a flyer about an instrument like this,
send me your address back channel with the subject line: "GEMA-Lead flyer."
Warm regards, David
David L. Hanson, Ph.D.
Consulting Psychologist
Charlotte, NC
Creator of GEMA-Lead 360