FPGA Part #2

Switched back to FPGA (ULX3S).

The sampling seemed to work quite well. I’ve been trying to visualize the image using the GPDI output, but I can’t figure out how to sync the pixel clock (15.75 MHz) and the VGA clock (25 MHz). That said, the data comes in just fine (I think?).
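The usual answer for two unrelated clocks is a dual-clock FIFO (or at least a line buffer) between the sampler and the VGA scan-out, since 25 MHz isn’t an integer multiple of 15.75 MHz. Below is a toy C++ simulation of that idea, not FPGA code: a producer ticking at 15.75 MHz, a consumer at 25 MHz, and the consumer repeating the last pixel whenever the FIFO runs dry. The depth and everything else are made-up illustration values.

```cpp
#include <cstdio>
#include <cstdint>
#include <algorithm>

// Toy model of a dual-clock FIFO: producer at 15.75 MHz (the sampler),
// consumer at 25 MHz (VGA scan-out). Depth and timings are illustrative.
const unsigned DEPTH = 16;
uint8_t fifo[DEPTH];
unsigned head = 0, tail = 0, count = 0;

bool fifo_push(uint8_t v) {
    if (count == DEPTH) return false;      // full: a sample would be dropped
    fifo[head] = v; head = (head + 1) % DEPTH; count++;
    return true;
}

bool fifo_pop(uint8_t &v) {
    if (count == 0) return false;          // empty: reader must reuse old pixel
    v = fifo[tail]; tail = (tail + 1) % DEPTH; count--;
    return true;
}

int main() {
    const double t_prod = 1.0 / 15.75e6;   // sampler period (s)
    const double t_cons = 1.0 / 25.0e6;    // VGA pixel period (s)
    const double T_END  = 63.5e-6;         // ~one NTSC line time
    double next_prod = 0.0, next_cons = 0.0;
    uint8_t sample = 0, pixel = 0;
    unsigned produced = 0, consumed = 0, underruns = 0;

    // Fire whichever clock edge comes first, until the line time is up.
    while (std::min(next_prod, next_cons) < T_END) {
        if (next_prod <= next_cons) {
            fifo_push(sample++);                // write side, 15.75 MHz domain
            produced++;
            next_prod += t_prod;
        } else {
            if (!fifo_pop(pixel)) underruns++;  // underrun: `pixel` is repeated
            consumed++;
            next_cons += t_cons;
        }
    }
    printf("produced=%u consumed=%u underruns=%u\n",
           produced, consumed, underruns);
    return 0;
}
```

On the ECP5 the same structure should map to a block-RAM dual-clock FIFO, with the sampling clock on the write side and the 25 MHz VGA clock on the read side.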

Next step is just to save a whole frame on an SD card and test with the Lua image converter.
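On the Teensy side, the dump itself can be as simple as the sketch below. This is a sketch under assumptions: `BUILTIN_SDCARD` and the `File` API are the stock Teensy SD library, but the frame geometry and `grab_frame()` are placeholders for the real sampling code (and a full 720×480 buffer assumes a Teensy 4.1’s RAM).

```cpp
#include <SD.h>

// Illustrative frame geometry -- adjust to whatever the sampler really produces.
const size_t FRAME_W = 720;
const size_t FRAME_H = 480;
uint8_t frame[FRAME_W * FRAME_H];          // one 8-bit sample per pixel

// Stub: the real version fills `buf` from the sensor-sampling code.
void grab_frame(uint8_t *buf, size_t len) {
  for (size_t i = 0; i < len; i++) buf[i] = 0;
}

void setup() {
  Serial.begin(115200);
  if (!SD.begin(BUILTIN_SDCARD)) {         // Teensy's on-board SD slot
    Serial.println("SD init failed");
    return;
  }
  grab_frame(frame, sizeof(frame));
  File f = SD.open("frame.raw", FILE_WRITE);
  if (f) {
    f.write(frame, sizeof(frame));         // raw dump, no header
    f.close();
    Serial.println("frame saved");
  }
}

void loop() {}
```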

Week 6

Decided to do a different kind of experiment this week. I wondered if it would be viable to use S-Video to do tapeless capture and skip the Mini-DV encoding. The results aren’t bad; both videos have different kinds of “issues”. Obviously the quality will be way worse than doing raw capture from the sensor.

The key here is to record the S-Video in real time. Some people have argued that S-Video is worse, but that’s because they record the tape and not the live feed. Mini-DV is on the left, S-Video on the right.

Note how the cable coming out of the keyboard, the texture on the desk, and the text on the Raspberry Pi (green circuit) are all clearer and sharper. There is some nice detail on the hand in the Mini-DV, though.

https://vimeo.com/717573940

I’m currently investigating the ADV7280 family of chips, which can sample analog video and output the data via MIPI CSI-2, a protocol the Raspberry Pi supports. I’ve started a page with a bunch of links here ‣. It seems like there is already a driver for the chip in the Raspberry Pi kernel, which is nice.
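Assuming the mainline driver (the `adv7180` module, which also covers the ADV7280 parts) and a matching device-tree overlay, the decoder should show up as a normal V4L2 capture device on the Pi. A minimal probe, with the device node, resolution, and pixel format all being my guesses rather than anything verified:

```cpp
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main() {
    // Assumed node; the actual one depends on the overlay/driver setup.
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    v4l2_capability cap{};
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) { perror("QUERYCAP"); return 1; }
    printf("driver: %s, card: %s\n",
           (const char *)cap.driver, (const char *)cap.card);

    // Ask for 720x480 interlaced UYVY, a typical NTSC decoder output.
    v4l2_format fmt{};
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 720;
    fmt.fmt.pix.height      = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
    fmt.fmt.pix.field       = V4L2_FIELD_INTERLACED;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("S_FMT"); return 1; }
    printf("negotiated %ux%u, %u bytes per frame\n",
           fmt.fmt.pix.width, fmt.fmt.pix.height, fmt.fmt.pix.sizeimage);

    close(fd);
    return 0;
}
```

Actual streaming would then go through the usual `VIDIOC_REQBUFS`/mmap loop; running `v4l2-ctl --list-formats-ext` first is the quickest way to see what the driver really offers.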

Did some additional work on the Teensy + Lua script to grab data directly from the sensor. I’m surprised by the performance: I can grab 30 fps almost in real time. I could probably re-code the Lua script in C if I need more performance, but this will at least allow me to reconstruct images from the data captured by the Teensy.
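As a starting point for that Lua-to-C port, the core of the converter could be as small as wrapping the raw dump in a PGM header so any image viewer opens it. The 720×480, 8-bit-grayscale layout is my assumption, not the actual capture format:

```cpp
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s frame.raw out.pgm\n", argv[0]);
        return 1;
    }
    // Assumed geometry -- must match whatever the Teensy actually dumps.
    const int W = 720, H = 480;

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }
    std::vector<unsigned char> buf(W * H);
    if (fread(buf.data(), 1, buf.size(), in) != buf.size()) {
        fprintf(stderr, "short read: expected %dx%d bytes\n", W, H);
        return 1;
    }
    fclose(in);

    FILE *out = fopen(argv[2], "wb");
    if (!out) { perror("fopen"); return 1; }
    fprintf(out, "P5\n%d %d\n255\n", W, H);   // binary PGM header
    fwrite(buf.data(), 1, buf.size(), out);
    fclose(out);
    return 0;
}
```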