Some Observations While Using EVPmaker with Human Subjects and a Monosyllabic Nonsense Word List

Due to the pressure of practicalities I am going to have to give up further experiments with EVPmaker for the present.

Regarding some results I had got when using the phrase, "Where is my old friend Ellen?", Jeff King of NZ asked, 'Were any of the responses an answer to your question?'

No. They weren't.

This raises a point which we should consider, if only to avoid looking like credulous clowns in the eyes of the world. If you get a phrase which answers your question, but this only happens once in (say) every 30 phrases, one must have doubts about whether the answer was really an answer or just a coincidental event, which would have a certain probability of occurring anyway. And that is a very important point. A few weeks ago I proposed an experiment to Prof. Vorster to assess the relevance of responses. The only way to be really sure of this, in popular understanding anyway, is to have several sequential exchanges. This I am proposing to do by improving the intelligibility of the Alpha output to the point where the operator will be able to make a reasonable guess at what is being said on first hearing it. This is likely to take longer than planned, due to the aforesaid factors impinging on the program.
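The coincidence worry can be put in rough numbers. As a sketch only (the 1-in-30 rate is the illustrative figure used above, not a measured one), here is the chance of getting at least one coincidental "answer" in a session of n phrases:

```python
# Sketch of the coincidence argument above. The per-phrase rate is
# the illustrative 1-in-30 figure from the text, not a measured value.
p = 1 / 30  # assumed chance that any one phrase happens to "answer" the question

def chance_of_coincidence(n_phrases, p=p):
    """Probability of at least one coincidental 'answer' in n_phrases."""
    return 1 - (1 - p) ** n_phrases

for n in (10, 30, 100):
    print(f"{n} phrases: {chance_of_coincidence(n):.0%}")
```

Even at a modest 1-in-30 rate, a 30-phrase session produces a coincidental "answer" well over half the time, which is why several sequential exchanges are needed before relevance means much.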

Next, Professor Jose Feola mentioned the similarity between EVPmaker and some salon games of the past. Please elaborate, Jose; after my initial rejection of this point, I do have some faint recollection of hearing something of this sort. More information, Jose, please, as soon as you can.

A prominent member of the AAEVP has said that people were getting a lot of "audio-nasties" with EVPmaker, so it was supposed that the cause might have something to do with EVPmaker itself.

I didn't believe it.

In my last emails I described experiments using lists of synthesised words; in the last experiment the list consisted of 13 monosyllabic nonsense words.

In Saturday night's experiment I took that same set of words but used a human speaker. Subject A: 25 years old, intelligent, interested in the 'Other Reality'; spent early life in care; poorly educated; then prison and drug abuse (possibly on recreational drugs even while carrying out the experiment).

Results at slice thicknesses of 45, 120, and 450 ms were quite reasonable, although in most cases with a much lower conversion efficiency than the synthesised words. Now here was a subject who, on paper, should have been liable to lows.

Apart from one phrase that was slightly rude in a juvenile sort of way, the rest was quite bland. One notable phrase was 'I task myself', which seemed to tie in with what 'A' had told me about why he had reformed: he now ruled himself.

Another interesting feature was a single word, 'Campbelltown'. This was not pronounced in the normal way, with 'town' pronounced as "taawin"; instead it was pronounced in the old Scottish way as "toon": 'Campbelltoon'.

The next day I presented the results to subject A. He was very interested and pleased with the results. But when I asked him, he did not know what 'task myself' meant. He was not aware of the modern usage of task as a verb. He also did not see any significance in 'Campbelltown'.

The usage comes from aerospace meetings, and possibly from the military originally: in a meeting's minutes, when various people are assigned to carry out specific objectives, they are described as being 'tasked' with accomplishing them. Perhaps the use of task as a verb has moved into society generally; I don't know.

What this suggests is that perhaps the results are divorced from the person whose words are on the original recording.

Subject B (myself) read the same nonsense list into the recorder, producing a sound file twice as long as that produced by the synthetic voicing or by subject A. There were nil results as far as comprehensible English words were concerned. To avoid introducing another variable into the experiment, I decided to read the list again, this time without such long pauses between the words.

I did that but it did not make a great deal of difference.

When it came to evaluation, no English utterances were perceived. It sounded like words, it sounded like a language, but it wasn't English.

One point was reassuring. It has sometimes been worrying that I could so often find words or phrases that nobody else had picked up. I knew this was due to having so much more experience and knowing what to listen for, being tuned in to anything that seemed to have a glottal-rate sound to it, but even so....

It was therefore heartening to find that I was registering zero results for one slice size after another. At 900 ms, for both subjects A and B (both tests), it was possible to tell which of the 13 nonsense words was being selected for each slice. The words were assigned numbers from 1 to 13 and the numerical series was examined for randomness. As in the previous case, randomness was notably missing for each of the 900 ms runs: the same word or pair of words would be repeated 3, 4 or more times.
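The repeat pattern described above is easy to check mechanically. A minimal sketch (the sample sequence is made up; real data would be the word numbers heard at each 900 ms slice):

```python
# Minimal sketch of the repeat check described above: given the
# sequence of word numbers (1..13) heard at 900 ms slices, find runs
# where the same word recurs. The example sequence is hypothetical.
from itertools import groupby

def runs(seq):
    """Return (value, run_length) for each run of identical items."""
    return [(v, len(list(g))) for v, g in groupby(seq)]

def longest_run(seq):
    """Length of the longest run of identical items (0 if empty)."""
    return max(n for _, n in runs(seq)) if seq else 0

sample = [7, 7, 7, 2, 5, 5, 5, 5, 1, 13, 13]  # hypothetical data
print(runs(sample))
print(longest_run(sample))  # 4
```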

I now conclude that this is a "stroboscopic" effect: the slicing rate is close to the word repetition rate. Using connected speech and words of different lengths would help, but it is suggested that long slices should be avoided.
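One way to weigh the stroboscopic suspicion is to ask how often runs of repeats would appear if each slice really did pick one of the 13 words independently at random. A quick Monte Carlo under that assumption (the 30-slice session length is an arbitrary choice, not a figure from the experiment):

```python
# If slices picked one of the 13 words uniformly at random, how often
# would the same word appear 3+ times in a row in a 30-slice session?
import random
from itertools import groupby

def has_run(seq, length):
    """True if seq contains a run of identical items at least this long."""
    return any(len(list(g)) >= length for _, g in groupby(seq))

def trial(n_slices=30, n_words=13, rng=random):
    """One simulated session of uniformly random word selections."""
    return [rng.randrange(n_words) for _ in range(n_slices)]

random.seed(0)
hits = sum(has_run(trial(), 3) for _ in range(10_000))
print(hits / 10_000)  # fraction of sessions with a triple repeat
```

Under purely random selection, a triple repeat turns up in only a minority of such sessions, so words repeating 3, 4 or more times suggests something non-random at work, which is consistent with the stroboscopic explanation.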

There was one more result. Having got no results even down to 45 ms, I decided to try decreasing the slice size to 10 ms. Nothing. How about 20 ms? This time there were at least two utterances. In the second one a name appeared to be given, followed by 'I want to talk to a paedophile'.

This was surprising: the on-paper "bad guy" (prison, drugs) got bland stuff, and the on-paper "good guy" (me) got an "audio-nasty". So did the cause lie in EVPmaker?

I doubt it. One result proves nothing of course, but it is interesting.