
HLS: Resource usage in HoughLines2


Testo

Question

Hi, my project uses HoughLines2 to track lines in an image.

However, when I synthesize it in HLS, the resource usage of this algorithm is huge.

According to the report, it needs at least an XC7Z030 to fit. I think something is wrong here.

I tried resizing the input image to be smaller. The resource usage is reduced, but only BRAM_18K; the other resources stay the same.

Am I programming it wrong? Or is this the general resource usage for HoughLines2?
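For what it's worth, a back-of-the-envelope estimate of the Hough accumulator size may explain this behavior. If THETA and RHO are compile-time resolution parameters (as they appear to be in the code below), the accumulator dimensions and their update logic are fixed at synthesis time regardless of the runtime image size. The sketch below is only an illustration of that sizing argument; `estimateHoughBins`, `rhoRes`, and `thetaResDeg` are made-up names, not the library's API:

```cpp
#include <cmath>

// Hypothetical back-of-the-envelope estimate (not the library's exact sizing):
// a Hough accumulator spans thetaBins x rhoBins, where rho ranges over
// [-diagonal, +diagonal] of the image. If the bin counts are derived from
// compile-time template parameters, the accumulator and the logic that updates
// it are sized at synthesis time, which would explain why shrinking the
// runtime image only reduces BRAM while DSP/FF/LUT usage stays flat.
struct HoughBins { int rhoBins; int thetaBins; };

HoughBins estimateHoughBins(int rows, int cols, double rhoRes, double thetaResDeg) {
    double diag = std::sqrt(double(rows) * rows + double(cols) * cols);
    HoughBins b;
    b.rhoBins = int(std::ceil(2.0 * diag / rhoRes));      // rho in [-diag, +diag]
    b.thetaBins = int(std::ceil(180.0 / thetaResDeg));    // theta in [0, 180)
    return b;
}
```

For a 640x480 frame at 1-pixel / 1-degree resolution this gives a 1600 x 180 accumulator, i.e. hundreds of kilobits of state plus the sin/cos and compare logic to update it every pixel.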

houghResource.png

The resource usage from the larger input size.

houghResourceSmallerImage.png

The resource usage from the smaller input size.

 

Code to synthesize

void doFilter(AXI_STREAMIN& video_in, int angle[MAXLINES], int rho[MAXLINES],int rows, int cols) {
    //Create AXI streaming interfaces for the core
#pragma HLS INTERFACE axis port=video_in bundle=INPUT_STREAM

#pragma HLS INTERFACE ap_memory port=angle
#pragma HLS INTERFACE ap_memory port=rho

#pragma HLS INTERFACE s_axilite port=rows bundle=CONTROL_BUS offset=0x14
#pragma HLS INTERFACE s_axilite port=cols bundle=CONTROL_BUS offset=0x1C
#pragma HLS INTERFACE s_axilite port=return bundle=CONTROL_BUS

    RGB_IMAGE img_0(rows, cols);
    RGB_IMAGE img_1(rows, cols);
    IMAGE_C1 H(rows, cols);
    IMAGE_C1 S(rows, cols);
    IMAGE_C1 V(rows, cols);
    IMAGE_C1 HMAX(rows, cols);
    IMAGE_C1 SMAX(rows, cols);
    IMAGE_C1 VMAX(rows, cols);
    IMAGE_C1 HMM(rows, cols);
    IMAGE_C1 SMM(rows, cols);
    IMAGE_C1 VMM(rows, cols);
    IMAGE_C1 HSMM(rows, cols);
    IMAGE_C1 inRange(rows, cols);

#pragma HLS dataflow

    hls::AXIvideo2Mat(video_in, img_0);
    hls::CvtColor<HLS_RGB2HSV>(img_0, img_1);               // convert to HSV

    // In-range test per channel: zero out values above the max,
    // then binarize values above the min
    hls::Split(img_1, H, S, V);
    hls::Threshold(H, HMAX, maxHChar, 0, HLS_THRESH_TOZERO_INV);
    hls::Threshold(HMAX, HMM, minHChar, 255, HLS_THRESH_BINARY);
    hls::Threshold(S, SMAX, maxSChar, 0, HLS_THRESH_TOZERO_INV);
    hls::Threshold(SMAX, SMM, minSChar, 255, HLS_THRESH_BINARY);
    hls::Threshold(V, VMAX, maxVChar, 0, HLS_THRESH_TOZERO_INV);
    hls::Threshold(VMAX, VMM, minVChar, 255, HLS_THRESH_BINARY);

    // AND the three channel masks into a single binary mask
    hls::And(HMM, SMM, HSMM);
    hls::And(HSMM, VMM, inRange);

    // Detect lines on the binary mask
    hls::Polar_<int, int> lines[MAXLINES];
    hls::HoughLines2<THETA, RHO>(inRange, lines, HOUGHTHRESHOLD);

    // Copy detected lines out through the ap_memory ports
    for(int i = 0; i < MAXLINES; i++){
        angle[i] = lines[i].angle;
        rho[i] = lines[i].rho;
    }
}

Thanks.



@Testo,

I'm not much of an HLS user. 

But I will say this: there's some things that work well on an FPGA, and some things that work well within a CPU.

When operating on an image, people often think of operating on the entire image at once. But holding a full frame in logic (not memory), so that you can act on every pixel simultaneously, takes enormous resources, and given that video streams typically run at frame rates on the order of 50-100 Hz, it's rarely necessary. To keep logic usage down, most video algorithms instead start at some address in memory at the top of the frame and stream through it, doing only a small amount of logic on each pixel as it passes. This keeps them away from the massive resources required to hold the entire image in logic at once.
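The streaming idea above can be sketched in plain C++ (no HLS pragmas; the function name and the 3x3 window are illustrative, not any particular library's API). The point is that only two rows of line-buffer storage are kept, never the whole frame:

```cpp
#include <vector>
#include <cstdint>

// Hypothetical sketch of pixel-streaming with a line buffer: a 3x3 windowed
// sum over a rows x cols frame that arrives one pixel at a time. Storage is
// two rows (the line buffer) plus nine window registers, independent of the
// frame height -- the structure most streaming video cores synthesize to.
std::vector<int> stream3x3Sum(const std::vector<uint8_t>& frame, int rows, int cols) {
    std::vector<int> out(rows * cols, 0);
    std::vector<uint8_t> lineBuf(2 * cols, 0);  // only 2 rows of storage
    std::vector<uint8_t> window(9, 0);          // 3x3 sliding-window registers

    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            uint8_t px = frame[r * cols + c];   // one pixel arrives per cycle
            // shift the window one column to the left
            for (int k = 0; k < 3; ++k) {
                window[k * 3 + 0] = window[k * 3 + 1];
                window[k * 3 + 1] = window[k * 3 + 2];
            }
            // new right column: two buffered rows plus the fresh pixel
            window[0 * 3 + 2] = lineBuf[0 * cols + c];
            window[1 * 3 + 2] = lineBuf[1 * cols + c];
            window[2 * 3 + 2] = px;
            // rotate this column of the line buffer
            lineBuf[0 * cols + c] = lineBuf[1 * cols + c];
            lineBuf[1 * cols + c] = px;
            // emit a result once the window covers a full 3x3 neighborhood
            if (r >= 2 && c >= 2) {
                int sum = 0;
                for (int k = 0; k < 9; ++k) sum += window[k];
                out[(r - 1) * cols + (c - 1)] = sum;  // centered output
            }
        }
    }
    return out;
}
```

A Hough transform fits this pattern on the voting side (each arriving pixel updates the accumulator), which is why the accumulator itself, not the frame, dominates the resource cost.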

Hope this helps,

Dan
