

Hi there, i a newbie. I hope i am asking the right questions.
I have been trying to figure out what kind of hardware/software is needed for my project. The project is capturing images with a high-speed image sensor at a frame rate of 500-1000 fps at lower resolutions such as 160x120 or 480x360 pixels. The frames will be read or saved with an FPGA SoC board. As far as I know, I have the following interface options:
Option 1: Board-level image sensor with FFC cable (LVDS or MIPI CSI-2 interface)
Option 2: Boxed camera with USB 3.0 cable interface

I am leaning toward option 2:
The Basler acA640-750um USB 3.0 camera uses the ON Semiconductor PYTHON 300 CMOS sensor.
This camera delivers 751 fps at VGA resolution. So if I have an FPGA board with USB 3.0, would this option be okay for plug and go?

I hope someone can point me in the right direction.

Regards,
Sirac


Hi,
Just thinking aloud: couldn't you use an embedded / industrial PC as the platform, prototyping in some convenient language and possibly using the graphics card for fast algorithms (OpenCL or CUDA)?

This...
>> i a newbie
... sets off some alarm bells in my head. You wouldn't be the first one to dramatically underestimate the required effort unless existing sample code does all the work. Maybe I'm too pessimistic, but, for example, what lies between the hypothetical FPGA board's USB 3.0 host and your application? A camera driver? Will you write that yourself?
 


Hi @pollutioncontroltech

I agree with @xc6lx45's statement about underestimating the task. As a newbie you can only see the parts that are covered by your experience. The concept you described would, in my view, require substantial knowledge and a highly qualified FPGA professional.

Now, regarding your options: answering properly requires some research to substantiate either choice. I think the key factor is bandwidth. You need to construct a block diagram of your system (maybe several) and estimate the required I/O data rates for each module. You also need to check whether the required IP is available or whether you will need to write your own code. After making these estimates you will be able to see the bottlenecks and critical points of your concept.
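As a rough illustration of the kind of estimate I mean, here is a small Python sketch comparing the required sensor data rates with some very approximate usable interface throughputs. The 8-bit monochrome assumption and the per-interface figures are mine, for illustration only, not datasheet values:

```python
# Rough sanity check: required sensor data rate vs. approximate usable
# interface throughput. None of these figures are datasheet values.

BYTES_PER_PIXEL = 1  # assuming 8-bit monochrome output

modes = {
    "160x120 @ 1000 fps": (160, 120, 1000),
    "480x360 @ 1000 fps": (480, 360, 1000),
    "640x480 @ 751 fps":  (640, 480, 751),   # e.g. the Basler camera at VGA
}

# Very rough usable throughput per interface, in MB/s (assumptions)
interfaces = {
    "Gigabit Ethernet":               100,
    "USB 3.0":                        350,
    "MIPI CSI-2, 2 lanes @ 800 Mb/s": 200,
}

for name, (w, h, fps) in modes.items():
    required = w * h * fps * BYTES_PER_PIXEL / 1e6   # MB/s
    print(f"{name}: {required:.1f} MB/s required")
    for iface, budget in interfaces.items():
        verdict = "fits" if required < budget else "exceeds"
        print(f"  {iface} (~{budget} MB/s): {verdict}")
```

Even a crude table like this shows immediately which interfaces are comfortable and which are marginal for your target modes.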

Personally, I would prefer to avoid a USB interface because of IP complexity and ARM processor and RAM latencies. The best option would be to use gigabit transceivers for interfacing with the video source.

Good luck!


Thank you both (@Notarobot, @xc6lx45) for your responses, and I am sorry that I missed an "am" when I wrote my post. But I can assure you both that I am not underestimating the work that has to be done. I just don't know where to start and which way is less trouble. As I said, I am at a beginner level in all of this and I would be grateful if someone could point me in the right direction.

I want to build this project and am willing to learn and dig in. I am researching how to physically connect the image sensor to an FPGA board, and I am having a hard time finding any examples or demos; that is why I still haven't ordered an FPGA board, because I don't know what the best match is.

 


Hi again (@Notarobot, @xc6lx45),
While I was searching online I came across this project/kit (https://www.xilinx.com/support/documentation/application_notes/xapp794-1080p60-camera.pdf) from Xilinx/Avnet. It pairs an Avnet FMC-IMAGEON daughter board and a VITA-2000 camera/image sensor (with the PYTHON-1300-C as an option) with a Xilinx ZC702 board. It reads images at 60 fps at 1080p resolution and then does the other tasks (displaying the images, etc.).


I am basically looking for something like this, but I will need more fps at lower resolutions. I know that the necessary changes to the code/IPs will have to be made. Beyond that, these questions have popped up in my mind:

Q1: Can this example project do 500+ fps at a lower resolution (160x120 pixels)?
Q2: If not, what is the bottleneck and is there a way to fix it?
Q3: If the bottleneck is the image sensor, can it be replaced with another one? Do you sell different embedded vision kits that are capable of doing my project?
Q4: If the bottleneck is the FMC daughter board, do you have a better one, or do you have any suggestions?
Q5: If the bottleneck is the interface, what do you suggest?
Q6: Is there any other way to connect the image sensor to an FPGA? If so, could you help me find out how?


Hi @pollutioncontroltech,

Unless Notarobot and xc6lx45 have worked with this specific project (which I suspect they have not), they would not know whether the project you linked to can run at alternate resolutions at higher frame rates; that will be a question for the creators of that particular demo. Glancing through the documentation, it looks like a paid IP core ran this demo, so the likelihood that users on this forum have paid for this IP core (Digilent has not) and are able to readily give out advice is not high. A look at the datasheet for the camera used in that demo shows that the image sensor is not the bottleneck here, so it is likely the way the IP project has been designed and the serial communication associated with the project. There are likely other ways to connect the image sensor to the FPGA, but with high-speed devices you will inevitably run into large pin counts (that camera has 52 pins) and tight routing requirements, which all tend to need some level of specifically designed/customized work.

In terms of image applications that Digilent has done, we only have the Pcam 5C and a (now obsolete) VmodCAM, but neither of those applications went above 60 fps.

Did you take a look at this thread that Jon linked you to for other image options?

Thanks,
JColvin


Dear @pollutioncontroltech

First of all let me say that I admire your determination.

Since all your questions are very general, answering them would require either research or experience with similar projects. Most professionals are doing paid work and can't share the results of commercial development. It is unreasonable to expect that somebody will put in time doing the job that you are supposed to do, or share key details of a commercial product.

Here is my advice:

1. Try to understand what data processing your project needs: every step, data formats, data rates, etc. Then construct a block diagram of your system. You will then understand where the bottlenecks in your concept are.

2. Find a commercial product similar to Avnet's (they have a variety of options) and, by reading the documentation, try to understand how it works and what determined their choices. You will see how to make it work for you.

3. When you achieve some clarity, ask questions, because at this time, I am afraid, you are hoping that putting various components together will do the trick.

I hope you have experience in HDL and EE design, and a lot of time. Anyway, you will have fun.

Good luck!


@pollutioncontroltech,

You've given the image size and the frame rate, but what would be the underlying pixel clock and the number of bits per pixel?

I know I've used the Nexys Video board successfully at 1920x1080 and 60 fps. That corresponds to an underlying pixel clock of 148.5 MHz, and it doesn't overload the memory (until you want to read as well ...).

Just multiplying out 480*360*1000 suggests a pixel clock of about 172 MHz. That'd be a bit faster, so ... I'm not quite certain how close that is to the wall, or whether it's doable. At 500 fps it certainly should be doable. Of course, by doable I mean within the bounds of what the FPGA can process and what the memory can handle, not whether or not you can properly manage the I/O.
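If it helps, here's that multiplication written out as a quick sketch for the modes in this thread. The 30% blanking overhead is purely my assumption; the real figure depends on the sensor's row and frame timing:

```python
# Back-of-envelope pixel clock estimates for the modes discussed in this
# thread. The 30% blanking overhead is an assumption, not a datasheet value.

BLANKING_OVERHEAD = 1.3   # assumed horizontal + vertical blanking factor

modes = [
    (1920, 1080, 60),     # Nexys Video case: ~148.5 MHz with standard blanking
    (480,  360,  1000),
    (480,  360,  500),
    (160,  120,  1000),
]

for w, h, fps in modes:
    raw = w * h * fps / 1e6                  # active pixels only, MHz
    estimate = raw * BLANKING_OVERHEAD       # crude total with assumed blanking
    print(f"{w}x{h} @ {fps} fps: {raw:.1f} MHz raw, "
          f"~{estimate:.0f} MHz with assumed blanking")
```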

Dan


1 hour ago, D@n said:

Just multiplying out 480*360*1000 suggests a pixel clock of about 172 MHz.

Hmm... doing a bit of math before setting out on a technically difficult project is never a bad idea. Nor is doing some preliminary tests on the intended platform. The first thing that I'd do is calculate the raw data rate from a particular sensor; obviously color sensors will have a higher data rate than monochrome sensors. If you determine that your FPGA SoC platform has an interface that can handle this, the next question is "what am I going to do with the data?" You indicate that you want to save the data (I'm assuming for post-processing). So how many frames do you intend to store? How are you going to format the data? Where do you intend to store that data? Does the data need to go off-board? Obviously, one can design a custom FPGA board to handle any image sensor, but things get more complicated when trying to shoehorn an aggressive design into a general-purpose platform.
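To make that concrete, here's a quick sketch of the sort of arithmetic I mean. The 8-bit monochrome pixels and the 512 MB of spare DDR are assumptions for illustration only:

```python
# How much recording fits in on-board DDR? Assumptions for illustration:
# 8-bit monochrome pixels and 512 MB of DDR left over for frame storage.

BYTES_PER_PIXEL = 1              # 8-bit mono assumed
SPARE_DDR_BYTES = 512 * 2**20    # assumed free memory after the rest of the design

modes = [
    (160, 120, 1000),
    (480, 360, 500),
    (480, 360, 1000),
]

for w, h, fps in modes:
    frame_bytes = w * h * BYTES_PER_PIXEL
    rate_mb_s = frame_bytes * fps / 1e6
    frames_that_fit = SPARE_DDR_BYTES // frame_bytes
    seconds = frames_that_fit / fps
    print(f"{w}x{h} @ {fps} fps: {rate_mb_s:.1f} MB/s into memory, "
          f"{frames_that_fit} frames (~{seconds:.1f} s) fit in 512 MB")
```

A few seconds of capture may be all the on-board memory buys you at the higher modes, which is exactly why the "where does the data go next?" question matters.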

 

On 2/25/2018 at 7:53 PM, pollutioncontroltech said:

The Basler acA640-750um USB 3.0 camera uses the ON Semiconductor PYTHON 300 CMOS sensor.
This camera delivers 751 fps at VGA resolution. So if I have an FPGA board with USB 3.0, would this option be okay for plug and go?

I don't know of an FPGA board that has a USB 3.0 host controller interface. Actually, I don't know of an inexpensive FPGA board with any USB 3.0 interface. In any case, you need a working knowledge of the USB architecture and protocol before choosing parts. All of the MIPI interface boards that I know of for use with an FPGA board come with a sensor.

Nvidia has some GPU SoC SOM development kits that might be of interest, but the support is not great.

