Problems>Solutions>Innovations - Lyn Buchanan's CRV

How Do You Score Session Results?

We score sessions in a manner far different from the way it is done in the research laboratory. We are more "real world" oriented and use the scores for much more than bulk research data. We use the scoring to develop VIEWER PROFILES: that is, profiles of each viewer's strengths and weaknesses. Those profiles give us a DEPENDABILITY rating for each viewer for each category of information. We then use the profiles and dependability ratings to help us in viewer selection and ongoing viewer training.

The individual session scores feed a viewer profile database that P>S>I keeps on the work of all its associated viewers. Instead of trying for an overall "accuracy" score, we search that database to find out what each viewer's strengths and weaknesses are. Therefore, when we need a viewer to work on, say, the color of the getaway car, we can look in the database and see which viewers have the highest percentage of correct perceptions of colors. That percentage is a viewer's "DEPENDABILITY RATING" where colors are concerned. Some other viewer might be great at perceiving sizes and shapes correctly but very weak at perceiving colors; we can use him/her to answer another question. Right now, we need someone who is very dependable at reporting the correct color. And so it goes for each question tasked to us by police departments, etc. "What kind of ship is carrying the drugs?" For this one, we would select the viewer who is best at shapes and sizes. "What kind of drug is being smuggled in?" For this, we would get the viewer who is best at smells and tastes.
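To make the bookkeeping concrete, here is a minimal sketch in Python of how such tallies might be kept. The data layout, sample names, and function names are illustrative assumptions, not P>S>I's actual records or software:

from collections import defaultdict

# Hypothetical scored perceptions from past sessions:
# (viewer, category, was the perception judged correct?)
scored_perceptions = [
    ("Joe Smith", "color", True),
    ("Joe Smith", "color", False),
    ("Jane Doe",  "color", True),
    ("Jane Doe",  "shape", True),
]

def dependability_ratings(perceptions):
    """Return {viewer: {category: percent of perceptions judged correct}}."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # [correct, total]
    for viewer, category, correct in perceptions:
        tallies[viewer][category][0] += int(correct)
        tallies[viewer][category][1] += 1
    return {viewer: {cat: 100.0 * hits / total
                     for cat, (hits, total) in cats.items()}
            for viewer, cats in tallies.items()}

def best_viewer(ratings, category):
    """Pick the viewer whose profile is most dependable for one category."""
    candidates = {v: cats[category] for v, cats in ratings.items() if category in cats}
    return max(candidates, key=candidates.get) if candidates else None

ratings = dependability_ratings(scored_perceptions)
print(best_viewer(ratings, "color"))   # -> Jane Doe (100% vs. Joe Smith's 50%)

Run as-is, the sketch would route a color question to "Jane Doe," since her sample color perceptions score 100% correct against Joe Smith's 50%.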

It became obvious many years back that different viewers have different strengths and weaknesses in the remote viewing arena. It seems logical, then, to look at the tasking which comes in and assign each task to the viewer most proficient at it. (In actual practice, tasking comes in and I break it up into its component questions, then task each question to the viewer whose track record shows that he/she is most dependable at answering that type of question.)
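Building on the hypothetical ratings sketch above, that routing step might look like the following. The task list and category labels are invented for illustration:

def assign_task(questions, ratings):
    """questions: list of (question, category) pairs; ratings comes from
    dependability_ratings() in the sketch above."""
    assignments = {}
    for question, category in questions:
        scored = {v: cats[category] for v, cats in ratings.items() if category in cats}
        assignments[question] = max(scored, key=scored.get) if scored else None
    return assignments

task = [
    ("What color is the getaway car?",       "color"),
    ("What kind of ship carries the drugs?", "shape"),
    ("What drug is being smuggled in?",      "smell/taste"),
]
# assign_task(task, ratings) maps each component question to the most
# dependable viewer for its category, or None if nobody has a track
# record in that category yet.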

However, to do this requires a lot more than just having a monitor or analyst think back on a viewer's past results and say, "You know, Joe Smith is really good at that - let's give him the task." That is nothing more than a personal value judgement. In order to know exactly what a viewer's strengths and weaknesses are, you have to collect a LOT of data, organize and keep it properly, and then do a LOT of analytic work on it. What develops is then no longer a personal value judgement, but an exact VIEWER PROFILE.

There are certain requirements:

First, you must have feedback in order to judge each perception correctly. I agree with Ingo Swann that, if you don't have feedback, you may be doing a lot of amazing stuff, but you aren't doing Controlled Remote Viewing.
Second, you must have a "non-waffled" scoring system. If, for example, the viewer says,

"There is a red moving vehicle against an unmarked, green background."

...and the feedback shows that Object #1 is a green vehicle against a plain red background, with no way to tell whether it is moving or not, then you have a problem in scoring. A debunker would say, "He got the red in the wrong place! See? Remote viewing doesn't work at all!" A person who is desperate to believe anything and everything psychic would say, "Well, he got the red right, just in the wrong place, and most vehicles move, so let's give him credit for those two perceptions." Each of these is as unscientific, illogical, and undependable as the other. (BTW: the second is what usually happens when most people score a viewer's session.)

In order to facilitate a "non-waffled" scoring environment, I devised the following "outlined summary" method for restructuring the viewer's perceptions into a more judgeable format for evaluation against feedback. The perceptions are placed one per line, so that each can be judged individually, and in outline order, so that you can easily see the context in which each perception was reported. The viewer's statement is changed from:

"There is a red moving vehicle against an unmarked, green background."

to:

There is a vehicle
.....which is red
.....and moving
.....against a background
..........which is unmarked
..........and green
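To show how this outline supports non-waffled scoring, here is a small self-contained Python sketch. The one-perception-per-line structure mirrors the example above; the score codes ("C" correct, "I" incorrect, "?" cannot be judged from feedback) are illustrative assumptions, not a prescribed notation:

# Each entry is (outline depth, single perception); each line is judged alone.
summary = [
    (0, "There is a vehicle"),
    (1, "which is red"),
    (1, "and moving"),
    (1, "against a background"),
    (2, "which is unmarked"),
    (2, "and green"),
]

# Feedback: a green vehicle against a plain red background; motion unknown.
scores = ["C", "I", "?", "C", "C", "I"]

for (depth, perception), score in zip(summary, scores):
    print(f"{'.' * (5 * depth)}{perception:<35} [{score}]")

Judged against that feedback, the vehicle and background lines score correct, the two misplaced colors score incorrect, and the motion claim is recorded as unjudgeable rather than being waffled into either a hit or a miss.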