World’s first stationary POV


I was thinking that POV is a cool way to get a low number of LEDs to draw an image over a surface. Persistence Of Vision, by the way, is the idea of moving a small array of LEDs around while they blink on and off quickly. You perceive the little blips as stationary pixels that can form a 2D image.

On the down side, you need some other hardware to move the LED array. Some bright people have been attaching them to already-moving objects such as bicycle wheels.

I got to thinking that your eyeball is already on the move, so why not exploit that? The problem is that the POV system doesn’t know when your eyes are sweeping over it.

I figured that if some decoys were set up to attract your eyes, then the POV could assume when your eyes are passing over the array.

So here’s the shtick: position LEDs A and B such that when your eyes move from A to B, they pass over the LED array.

That diagram shows the basic configuration. Obviously LEDs are just one idea for getting control of somebody’s eyeballs. Perhaps any moving, spinning or otherwise interesting object could be used.

I set up a quick array of 5 LEDs as a proof of concept. Surprise, surprise: it worked. My program made a letter “O” and I was able to perceive the letter “O” when my eyes swept past the LEDs.

[sourcecode language="cpp"]
/* Stationary POV, Darcy Whyte */
int bug = 1;
int pins[5] = {12, 11, 10, 9, 8};
int pointer = 0;

// Bitmap of the letter "O": 10 columns of 5 LEDs each (1 = on).
// The original listing elided the data; this pattern is one example.
int imageData[10][5] = {
  {0,1,1,1,0},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {1,0,0,0,1},
  {0,1,1,1,0}
};

void setup() {
  if (bug == 1) { Serial.begin(9600); }
  for (int i = 0; i <= 4; i++) { pinMode(pins[i], OUTPUT); }
}

void loop() {
  // Light the LEDs for the current column, then advance to the next one.
  for (int i = 0; i <= 4; i++) {
    digitalWrite(pins[i], imageData[pointer][i] == 1 ? HIGH : LOW);
  }
  pointer++;
  if (pointer > 9) { pointer = 0; }
  // delayMicroseconds only handles up to ~16383 us reliably, so keep this small.
  //delayMicroseconds(1000); // column period for an eye sweep
  delayMicroseconds(10000);  // slower, for waving the array by hand
}
[/sourcecode]

Aug 21st update: wtf? 1994! Okay, “The other first stationary POV in the world”… 🙂

5 thoughts on “World’s first stationary POV”

  1. Pingback: Scalable Wearables « Mambohead

  2. I’ve seen that used before with a long ‘stick’ that you could leave lying in a corner of a room displaying messages. In particular, your peripheral vision will pick up the message, but when you go to look at the source, it doesn’t exist. LED trolling. 😉

    • Ah ha… I went googling and found the source that I had seen before: Circuit Cellar from 1994. There is a scan of that article in this thread:

      I’ve kicked around the idea of building one of these for years… I didn’t realize how many years it had been since I actually saw the article. Wow.

      Cool that you came up with this independently though.

      • Hi Aaron,

        Thanks for the notes.

        I noticed one of the guys said:

        In human vision the eyes fixate for a period of time on one point in the visual scene then rapidly move to fixate on another point for a period of time and so on. The rapid jerky movements between points of fixation is so instinctive and natural that most people are unaware of it. Psychologists refer to these eye movements, which may also involve some head movement, as “saccadic” movements. Researchers in psychology of vision have made measurements of the angular rate of saccadic eye movement and found the angular rate to be quite uniform from person to person, and largely not consciously controllable. Angular rates reach a peak at the middle of the saccadic movement, and may be about 220-250 degrees per second for movements of 5° in angle and about 450-500 degrees per second for movements of 20° in angle.

        During saccadic movement, stationary objects in the field of view are not clearly perceived, but an image of the previous scene fixed upon persists for up to about 1/4 second, with diminishing intensity. The previous image is immediately supplanted by a new image as the eye rests on the next point of fixation, typically within 1/15 to 1/25 second. This action, which may be easily demonstrated using a device as described in the present invention, takes place so rapidly that one is aware only of a smooth continuum of vision, free from blur caused by movement.

        If you look in my code, you can see that the time between shifting the image rows is pretty small. I’m using like a microsecond or something.

Comments are closed.