
Real time video delay | port from Processing


@Gil_Fuser wrote:

Hello everybody,
I have a little piece of code in Processing, but I would like to have the same functionality in oF instead.
The code records a few seconds of webcam frames and plays them back, mixed with the live image.

The problem is: I don't know where to start.
I have been looking through the addons to see if there is an example that could be a starting point, but I've had no luck so far.
ofxVideoBuffer, ofxVideoUtils and ofxPlaymodes were my bets, but they let me down.

The ofxVideoBuffer basic video grabber example and the multi-tap example, which seemed (by their names) like good starting points, are missing a header I couldn't find anywhere: ofxVideoFrame.h

With ofxVideoUtils I got: .../BufferLoaderEvents.h:29: error: Poco/Exception.h: No such file or directory

and the ofxPlaymodes examples also complain about missing Poco headers, as in: .../addons/ofxPlaymodes/src/utils/pmUtils.h:11: error: Poco/Timestamp.h: No such file or directory
#include "Poco/Timestamp.h"
^

I would rather use a vanilla solution, but I don't know where to start. Any direction, either toward a solution or toward something I should learn so I can figure it out myself, will be greatly appreciated.
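My guess is that the core of a vanilla version would be a ring buffer of grabbed frames. Here is a minimal, untested sketch of that idea, assuming oF 0.9+ (ofVideoGrabber plus a std::deque of ofPixels; the class layout and the maxFrames name are just placeholders):

// ofApp.h -- minimal frame-delay sketch (untested, assumes oF 0.9+)
#pragma once
#include "ofMain.h"
#include <deque>

class ofApp : public ofBaseApp {
public:
    void setup() {
        grabber.setup(640, 480);  // match camera resolution here
    }

    void update() {
        grabber.update();
        if (grabber.isFrameNew()) {
            buffer.push_back(grabber.getPixels());  // copy the new frame
            while (buffer.size() > maxFrames) {
                buffer.pop_front();                 // drop the oldest frame
            }
        }
    }

    void draw() {
        ofSetColor(255);
        grabber.draw(0, 0);  // live image
        if (!buffer.empty()) {
            delayed.setFromPixels(buffer.front());  // oldest buffered frame
            ofEnableBlendMode(OF_BLENDMODE_ADD);    // mix with the live image
            ofSetColor(255, 255, 255, 167);         // like tint(255, 167)
            delayed.draw(0, 0);
            ofDisableBlendMode();
        }
    }

private:
    ofVideoGrabber grabber;
    std::deque<ofPixels> buffer;        // holds the last maxFrames frames
    const std::size_t maxFrames = 100;  // about 3 seconds, as in the code below
    ofImage delayed;
};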

best regards

Here goes the Processing code:

import processing.video.*;
Capture video;
int capW = 640;  //match camera resolution here
int capH = 480;
float yoff = 50.0;  // noise input for the delay-time modulation
float delayTime;

int nDelayFrames = 100; // about 3 seconds
int currentFrame = nDelayFrames-1;
int currentFrame2;
int currentFrame3;

int numPixels;
int[] previousFrame;

PImage frames[];
PImage framesHV[];
PImage framesV[];
PImage videoFlipH;

void setup() {
  size(640, 480);  //set monitor size here
  frameRate(200);
  video = new Capture(this, capW, capH );
  video.start();
  frames = new PImage [nDelayFrames];
  framesHV = new PImage[nDelayFrames];
  framesV = new PImage[nDelayFrames];
  videoFlipH = new PImage(video.width, video.height);
  for (int i = 0; i < nDelayFrames; i++) {
    frames[i] = createImage(capW, capH, ARGB);
    framesHV[i] = createImage(capW, capH, ARGB);
    framesV[i] = createImage(capW, capH, ARGB);
  }
  numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();
}

void draw() {

  delayTime = constrain(map(noise(yoff)*10, 1, 7, 1, 100), 1, 100);    // Option #2: 1D Noise
  yoff = (yoff+0.01) % nDelayFrames;
  nDelayFrames = int(delayTime);

  if (video.available()) {
    video.read();
    video.loadPixels(); // Make its pixels[] array available
    for (int loc = 0; loc < numPixels; loc++) {
      color currColor = video.pixels[loc];
      color prevColor = previousFrame[loc];  // only used by the commented-out lerp below

      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;

      // Mix the red, green, and blue values:
      // sum and halve the channels, pushing the result towards black and white
      // (look at the FrameDifferencing example if you want it in color)
      int newR = abs(int(currR+(currG+currB)/2)/2);
      int newG = abs(int(currG+(currR+currB)/2)/2);
      int newB = abs(int(currB+(currG+currR)/2)/2);
      // clamp the color values to the 0-255 range
      newR = newR < 0 ? 0 : newR > 255 ? 255 : newR;
      newG = newG < 0 ? 0 : newG > 255 ? 255 : newG;
      newB = newB < 0 ? 0 : newB > 255 ? 255 : newB;

      // Write the processed pixel back into the video frame
      video.pixels[loc] = 0xff000000 | (newR << 16) | (newG << 8) | newB;

      //previousFrame[loc] = lerpColor (previousFrame[loc], currColor, 0.1);
    }

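    // Build flipped copies: framesHV flipped both ways, framesV vertically, videoFlipH horizontally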
    for (int x = 0; x < video.width; x++) {
      for (int y = 0; y < video.height; y++) {
        framesHV[currentFrame].pixels[y*(video.width) + x] = video.pixels[(video.height - 1 - y)*video.width+(video.width-(x+1))];
        framesV[currentFrame].pixels[y*(video.width) + x] = video.pixels[(video.height - 1 - y)*(video.width) + x];
        videoFlipH.pixels[y*video.width + x] = video.pixels[y*video.width+(video.width-(x+1))];
      }
    }
    arrayCopy(video.pixels, frames[currentFrame].pixels);  // store the processed frame in the delay buffer
    frames[currentFrame].updatePixels();
    framesHV[currentFrame].updatePixels();
    framesV[currentFrame].updatePixels();
    tint(255, 167);
    updatePixels();
    videoFlipH.updatePixels();
    currentFrame = (currentFrame-1 + nDelayFrames) % nDelayFrames;
    currentFrame2 = (currentFrame + 30) % nDelayFrames;  // +30 frames of extra delay; must be less than nDelayFrames
    currentFrame3 = (currentFrame + 60) % nDelayFrames;  // +60 frames of extra delay; must be less than nDelayFrames

    image(framesHV[currentFrame], 0, 0, width, height);
    blend(frames[currentFrame2], 0, 0, width, height, 0, 0, width, height, OVERLAY);  //try with ADD, DARKEST etc here. see blend help
    blend(framesV[currentFrame3], 0, 0, width, height, 0, 0, width, height, SOFT_LIGHT);  //try with ADD, DARKEST etc here. see blend help
    blend(videoFlipH, 0, 0, width, height, 0, 0, width, height, LIGHTEST);  //try with ADD, DARKEST etc here. see blend help
  }
 //   println(nDelayFrames);
  println(int(frameRate));
}
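One thing I already suspect about the port: Processing's blend() modes don't all exist in oF. ofEnableBlendMode() offers ADD, SCREEN, MULTIPLY and a few others, but OVERLAY and SOFT_LIGHT would, as far as I can tell, need a fragment shader. A rough, untested sketch of the layered draw with the built-in modes (the stand-in modes are marked, and drawLayers with its arguments is just a placeholder):

// Rough equivalent of the layered blend() calls above, using oF's
// built-in blend modes. OVERLAY and SOFT_LIGHT have no fixed-function
// counterpart in oF, so approximate stand-ins are used here.
void drawLayers(ofImage &base, ofImage &tap1, ofImage &tap2, ofImage &flipped) {
    ofSetColor(255);
    base.draw(0, 0);                           // framesHV[currentFrame]

    ofEnableBlendMode(OF_BLENDMODE_SCREEN);    // stand-in for OVERLAY
    tap1.draw(0, 0);                           // frames[currentFrame2]

    ofEnableBlendMode(OF_BLENDMODE_MULTIPLY);  // stand-in for SOFT_LIGHT
    tap2.draw(0, 0);                           // framesV[currentFrame3]

    ofEnableBlendMode(OF_BLENDMODE_ADD);       // stand-in for LIGHTEST
    flipped.draw(0, 0);                        // videoFlipH

    ofDisableBlendMode();
}

The hand-rolled flip loops, on the other hand, should map directly onto ofPixels::mirror(vertically, horizontally), if I read the documentation right.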
