anuragkush
Contributor
1,855 Views
Registered: 01-22-2018

How to write back to source image in Vivado HLS?

I am trying to implement a connected component labelling algorithm in Vivado HLS. The algorithm I am using is the classic two-pass connected component labelling algorithm.

I am using a 3x3 window in Vivado HLS and send that window to my connected-component labelling function. The function returns a single pixel according to the algorithm, stores it in a destination image, and appends successive pixels as they arrive. But while processing the next window it ignores the result of the previous operation, even though the algorithm requires comparing against the previously output pixel.

I need to find a way to take that previous pixel into account, or to make the changes in the destination image itself.
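For reference, here is a plain-software sketch of the two-pass scheme I am following (ordinary C++, 4-connectivity and a simple equivalence table for brevity; the function name two_pass_label is just for illustration):

#include <vector>
#include <algorithm>

// Plain software reference of two-pass labelling (4-connectivity).
// img: 0 = background, non-zero = foreground; labels are written in place.
void two_pass_label(std::vector<std::vector<int> >& img)
{
    const int rows = img.size();
    const int cols = img[0].size();
    std::vector<int> parent(1, 0);               // equivalence table, index 0 unused

    // First pass: assign provisional labels and record equivalences.
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            if (img[r][c] == 0) continue;
            int up   = (r > 0) ? img[r-1][c] : 0;
            int left = (c > 0) ? img[r][c-1] : 0;
            if (up == 0 && left == 0) {          // no labelled neighbour: new label
                parent.push_back(parent.size());
                img[r][c] = parent.size() - 1;
            } else if (up != 0 && left != 0) {   // both labelled: keep the smaller,
                img[r][c] = std::min(up, left);  // record that they are equivalent
                int a = up,  b = left;
                while (parent[a] != a) a = parent[a];
                while (parent[b] != b) b = parent[b];
                if (a != b) parent[std::max(a, b)] = std::min(a, b);
            } else {
                img[r][c] = (up != 0) ? up : left;
            }
        }
    }

    // Second pass: replace every provisional label by its root label.
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            int x = img[r][c];
            while (x != 0 && parent[x] != x) x = parent[x];
            img[r][c] = x;
        }
    }
}

This is just a software reference; my HLS version below works on a streaming 3x3 window, which is where the problem appears.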

Here is my code:

 

conn.cpp

#include "CONN.h"
int label=50;
int min;

MY_PIXEL find_conn(MY_WINDOW *window)
{


      unsigned char west, north, northWest, northEast, south, east, southEast, southWest=0;


    MY_PIXEL pixel;

      char i=0;
      char j=0;





//            pixel.val[0]=window->getval(i,j);     //to make copy of original image.

               west=window->getval(i+1,j);
               northWest=window->getval(i,j);
               north=window->getval(i,j+1);
               northEast=window->getval(i,j+2);

               if(window->getval(i+1,j+1)!=0){

                   min=9600;

                   if(west!=0||north!=0||northWest!=0||northEast!=0){

                       if(west<min && west!=0)          min=west;
                       if(northWest<min && north!=0)    min=northWest;
                       if(north<min && north!=0)        min=north;
                       if(northEast!=0 && northEast!=0) min=northEast;

                       window->insert(min,i+1,j+1);
                   }
                   else
                   {
                       label= label+10;
                       window->insert(label,i+1,j+1);
                   }


                   }



          pixel.val[0]=window->getval(i+1,j+1);


      return pixel;
}






void created_window(MY_IMAGE& src, MY_IMAGE& dst, int rows, int cols)
{
  MY_BUFFER buff_A;
  MY_WINDOW WINDOW_3x3;

  for(int row = 0; row < rows+1; row++){
    for(int col = 0; col < cols+1; col++){
#pragma HLS loop_flatten off
#pragma HLS dependence variable=&buff_A false
#pragma HLS PIPELINE II = 1

      // Temp values are used to reduce the number of memory reads
      unsigned char temp;
      MY_PIXEL tempx;

      // Line buffer fill
      if(col < cols){
          buff_A.shift_down(col);
          temp = buff_A.getval(0,col);
      }

      // There is an offset to accommodate the active pixel region:
      // there are only MAX_WIDTH x MAX_HEIGHT valid pixels in the image
      if(col < cols && row < rows){
          MY_PIXEL new_pix;
          src >> new_pix;
          tempx = new_pix;
          buff_A.insert_bottom(tempx.val[0],col);
      }

      // Shift the processing window to make room for the new column
      WINDOW_3x3.shift_right();

      // The image is single-channel, so the 8-bit values go into the
      // processing window directly (no colour conversion needed)
      if(col < cols){
          WINDOW_3x3.insert(buff_A.getval(2,col),2,0);
          WINDOW_3x3.insert(temp,1,0);
          WINDOW_3x3.insert(tempx.val[0],0,0);
      }

      // The operator only works on the inner part of the image.
      // This design assumes there are no connected objects on the image boundary.
      MY_PIXEL conn_obj;
      conn_obj = find_conn(&WINDOW_3x3);

      // The output image is offset from the input to account for the line buffer
      if(row > 0 && col > 0) {
          dst << conn_obj;
      }
    }
  }
}






void create_window(AXI_STREAM& video_in, AXI_STREAM& video_out, int rows, int cols)
{
    // Create AXI streaming interfaces for the core
#pragma HLS INTERFACE axis port=video_in bundle=INPUT_STREAM
#pragma HLS INTERFACE axis port=video_out bundle=OUTPUT_STREAM

#pragma HLS INTERFACE s_axilite port=rows bundle=CONTROL_BUS offset=0x14
#pragma HLS INTERFACE s_axilite port=cols bundle=CONTROL_BUS offset=0x1C
#pragma HLS INTERFACE s_axilite port=return bundle=CONTROL_BUS

#pragma HLS INTERFACE ap_stable port=rows
#pragma HLS INTERFACE ap_stable port=cols

    MY_IMAGE img_0(rows, cols);
    MY_IMAGE img_1(rows, cols);

#pragma HLS dataflow
    hls::AXIvideo2Mat(video_in, img_0);
    created_window(img_0, img_1, rows, cols);
    hls::Mat2AXIvideo(img_1, video_out);   // stream out the processed image, not the input
}

conn.h

#ifndef _TOP_H_
#define _TOP_H_

#include "hls_video.h"


#define MAX_WIDTH  320
#define MAX_HEIGHT 240

typedef hls::stream<ap_axiu<8,1,1,1> >               AXI_STREAM;
typedef hls::Scalar<1, unsigned char>                 MY_PIXEL;
typedef hls::Mat<MAX_HEIGHT, MAX_WIDTH, HLS_8UC1>     MY_IMAGE;

typedef hls::Window<3, 3, unsigned char>              MY_WINDOW;
typedef hls::LineBuffer<3, MAX_WIDTH, unsigned char>  MY_BUFFER;

void create_window(AXI_STREAM& INPUT_STREAM, AXI_STREAM& OUTPUT_STREAM, int rows, int cols);

#endif
2 Replies
u4223374
Advisor
1,834 Views
Registered: 04-26-2015

You can't do it with AXI Streams. The only way to modify an image on the fly (and use the modified image data) is to store the image in RAM and read/write it directly. If the image is sufficiently small, you can do that in block RAM. Otherwise it will have to be off-chip via an AXI Master, which will be slow.
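For example, a rough skeleton of that approach: capture the whole frame into an on-chip array, process it in place, then stream it back out. The function name label_frame, the interface pragmas and the 320x240 size here are illustrative, not taken from the original design:

#include "hls_video.h"
#include "ap_axi_sdata.h"

#define MAX_WIDTH  320
#define MAX_HEIGHT 240

typedef hls::stream<ap_axiu<8,1,1,1> > AXI_STREAM;

void label_frame(AXI_STREAM& video_in, AXI_STREAM& video_out, int rows, int cols)
{
#pragma HLS INTERFACE axis port=video_in
#pragma HLS INTERFACE axis port=video_out
#pragma HLS INTERFACE s_axilite port=rows
#pragma HLS INTERFACE s_axilite port=cols
#pragma HLS INTERFACE s_axilite port=return

    // Whole frame kept on chip so earlier results can be re-read and updated.
    // 320 x 240 x 8 bit is about 75 KB of block RAM.
    static unsigned char frame[MAX_HEIGHT][MAX_WIDTH];

    // Capture the input stream into the frame buffer.
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
#pragma HLS PIPELINE II=1
            ap_axiu<8,1,1,1> p = video_in.read();
            frame[r][c] = p.data;
        }
    }

    // ... run both passes of the labelling in place on frame[][],
    //     reading and writing neighbours freely ...

    // Stream the labelled frame back out.
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
#pragma HLS PIPELINE II=1
            ap_axiu<8,1,1,1> p;
            p.data = frame[r][c];
            p.keep = -1;
            p.user = (r == 0 && c == 0);   // start of frame
            p.last = (c == cols - 1);      // end of line
            video_out.write(p);
        }
    }
}

This trades throughput for flexibility: the core then works frame-by-frame rather than pixel-by-pixel.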

 

It might be better to look at labelling algorithms that are more suitable for streaming images.
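As an illustration of that direction, here is a behavioural sketch of a streaming first pass: one line buffer of labels from the previous row, plus a small equivalence table that a later pass (or the host) resolves. Everything here is illustrative (names, sizes, 4-connectivity), and the union-find loops are not written to actually achieve II=1:

#include "hls_stream.h"

#define MAX_WIDTH  320
#define MAX_LABELS 1024                       // sized for illustration only

static unsigned short parent[MAX_LABELS];     // equivalence table in BRAM

static unsigned short find_root(unsigned short x) {
    while (parent[x] != x) x = parent[x];
    return x;
}

static void merge(unsigned short a, unsigned short b) {
    unsigned short ra = find_root(a);
    unsigned short rb = find_root(b);
    if (ra < rb) parent[rb] = ra; else parent[ra] = rb;
}

void label_first_pass(hls::stream<unsigned char>& in,
                      hls::stream<unsigned short>& out,
                      int rows, int cols)
{
    static unsigned short prev_row[MAX_WIDTH];  // labels of the previous row
    unsigned short next_label = 1;

    for (int r = 0; r < rows; r++) {
        unsigned short left = 0;                // label to the west on this row
        for (int c = 0; c < cols; c++) {
#pragma HLS PIPELINE II=1
            unsigned char  pix = in.read();
            unsigned short up  = (r > 0) ? prev_row[c] : 0;
            unsigned short lab = 0;

            if (pix != 0) {
                if (left == 0 && up == 0) {     // new component
                    parent[next_label] = next_label;
                    lab = next_label++;
                } else if (left != 0 && up != 0) {
                    lab = (left < up) ? left : up;
                    if (left != up) merge(left, up);   // record equivalence
                } else {
                    lab = (left != 0) ? left : up;
                }
            }
            prev_row[c] = lab;
            left = lab;
            out.write(lab);   // provisional label; resolve via parent[] later
        }
    }
}

The second pass (relabelling through parent[]) can then run over a frame buffer or on the host.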

tedbooth
Scholar
1,781 Views
Registered: 03-28-2016

I can see two options:

1) Define a static variable in your "find_conn" function to hold the current pixel value so that it can be used in the next round of processing.

 

2) Add an additional parameter to your "find_conn" function for the previous pixel value.  In your "created_window" function, hold onto the pixel value that is returned from "find_conn" and pass it into "find_conn" as the previous pixel on the next iteration of the loop.

 

I would lean toward option 2.
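A minimal sketch of option 2, adapted from the code in the question (the extra "prev" parameter and the "prev_pix" variable are additions for illustration):

#include "conn.h"

static int label = 50;

// Option 2: the result of the previous call is passed in and treated as one
// more neighbour when picking the minimum non-zero label.
MY_PIXEL find_conn(MY_WINDOW *window, unsigned char prev)
{
    MY_PIXEL pixel;

    unsigned char west      = window->getval(1, 0);
    unsigned char northWest = window->getval(0, 0);
    unsigned char north     = window->getval(0, 1);
    unsigned char northEast = window->getval(0, 2);

    if (window->getval(1, 1) != 0) {
        int min = 9600;
        if (prev != 0 || west != 0 || north != 0 || northWest != 0 || northEast != 0) {
            if (prev      < min && prev      != 0) min = prev;
            if (west      < min && west      != 0) min = west;
            if (northWest < min && northWest != 0) min = northWest;
            if (north     < min && north     != 0) min = north;
            if (northEast < min && northEast != 0) min = northEast;
            window->insert(min, 1, 1);
        } else {
            label = label + 10;
            window->insert(label, 1, 1);
        }
    }

    pixel.val[0] = window->getval(1, 1);
    return pixel;
}

// In created_window(), keep the returned value and feed it back in:
//
//     unsigned char prev_pix = 0;                    // before the loops
//     ...
//     conn_obj = find_conn(&WINDOW_3x3, prev_pix);
//     prev_pix = conn_obj.val[0];                    // used on the next iteration

An explicit parameter also keeps the data dependency visible to the tool, which is generally easier to reason about than a static variable inside the function.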

Ted Booth | Tech. Lead FPGA Design Engineer | DesignLinx Solutions
https://www.designlinxhs.com