An SOPC-Based Image Processing System
Huang Qiang, He Fei, Wu YiBo, Ji Zhen
(Shenzhen University)
Abstract: Recent advances in semiconductor technology have made it possible to integrate an entire system, including processors, memory and other system units, into a single programmable chip (FPGA); such configurations are called "System-on-a-Programmable-Chip" (SOPC). SOPCs have the advantage that they can be designed more quickly than existing technologies and are cheap to produce for low-volume (<10,000) applications. SOPCs also offer compact and flexible system designs owing to their reconfigurable nature and high integration of features. One processor-intensive application that is ideal for SOPC technology is image processing, where operations are applied repeatedly to 2D data. This research investigated the use of SOPC technology for image processing by developing a modular system capable of real-time video acquisition, processing and display.
The resulting system is an alternative to conventional desktop-based vision systems, such as a vision-based closed loop process control system for welding, or microprocessor-based vision systems.
Keywords: SOPC, Image Processing, FPGA
1 Image Processing
1.1 Overview of Image Processing
Image processing is the collation of spatially arranged intensity data, forming an image, which is processed to extract information about the scene.
The input of image processing is an image, such as a frame of video, while the output can be another image or a set of features extracted from the image. Image processing is largely independent of any single application domain. However, it plays an essential role in computer vision systems, such as a vision-based robot control system, because a robust image processing algorithm is required. For this reason, a real-time image processing system is often set up to verify specific image processing algorithms and estimate their performance in a real-time manner before they are applied in a complete (computer) vision system to perform a particular vision task. The real-time goal is to process all the required data within a given time interval, before the next image is ready for processing. To estimate the performance of a real-time image processing system, it is necessary to analyse how much data it can handle in real time.

Such a system generally provides three main functions: video acquisition, processing and output (see Figure 1-1), and the video data transfer in this system is one way. The video data can be acquired by an analogue/digital camera or a video recording device. As described in Chapter 1, the image processing can be performed using general processors such as a CPU, a general-purpose microprocessor, a DSP processor, a synthesised processor running on an FPGA, or processing elements built into FPGAs or ASICs. The video output generally refers to video display on monitors, which gives a visual indication of the image processing results. Furthermore, a data storage function is normally required to buffer the data output from each stage before it is sent to the next, owing to speed differences between the function modules.
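The real-time requirement above can be made concrete with a simple data-rate budget: the pipeline must sustain one frame's worth of pixels per frame period. The sketch below is illustrative only; the resolution and frame rate are assumed figures, not measurements from the system described here.

```python
# Hypothetical real-time budget check: how many pixels per second must the
# pipeline sustain, and how much time is available per pixel? The camera
# parameters below are illustrative assumptions.

def required_pixel_rate(width, height, fps):
    """Pixels per second the pipeline must sustain for real-time operation."""
    return width * height * fps

# Example: an assumed 640x480 camera at 25 frames per second.
rate = required_pixel_rate(640, 480, 25)   # 7,680,000 pixels/s
budget_ns = 1e9 / rate                     # time budget per pixel, in ns
print(f"{rate} pixels/s, about {budget_ns:.0f} ns per pixel")
```

Any processing stage whose per-pixel latency exceeds this budget forces either buffering between stages (as the text notes) or dropped frames.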
Figure 1-1 Common steps in real-time image processing system
1.2 Vision-Based Closed Loop Process Control System for Welding
The vision-based closed loop process control system was designed to improve weld quality. In this system, image analysis and processing are applied to the images captured by the camera to supply real-time measurements of the weld, providing additional information to the process operator. The description of this system in the next several sections focuses on the image acquisition and processing parts, as the author was involved in developing the image processing software and, more importantly, its application to the research described in this thesis. More details regarding the post-processing and how the whole system functions can be obtained elsewhere. Figure 1-2 illustrates a simplified diagram of the closed loop weld process control system.
Figure 1-2 Closed loop weld process control
2 The Nios Integrated Real-time Image Processing System: Hardware
Based on SOPC technology, a Nios integrated real-time image processing system was developed and evaluated on the Nios development kit. In the next two chapters, a detailed description of this system is presented, with emphasis on the hardware and the soft system core. This chapter focuses on the hardware of the system.
2.1 Overview of the System Hardware Architecture
Following the guidelines described in Chapter 3 for implementing a real-time image processing system based on SOPC, the hardware architecture of SIPS was designed as shown in Figure 2-1.
Figure 2-1 Block diagram of the hardware architecture of SIPS
2.2 Camera Interface Card (Custom Designed)
Signals from the CameraLink camera are a number of pairs of low-voltage differential signalling (LVDS) data which carry the video data and timing signals. The camera interface card, which was designed with Cadstar by the author and manufactured by PCB Train, mainly consists of LVDS receivers and transmitters to convert the LVDS data streams into parallel MultiVolt I/O data which the FPGA can accept. It also converts the control and configuration data driven from the FPGA into pairs of LVDS signals to either trigger the camera or configure its working mode.
Figure 2-2 Block diagram of the camera interface card
The main components on this interface card are a 3M™ Mini D Ribbon (MDR) connector, a 28-bit LVDS receiver (DS90CR286), an LVDS quad CMOS differential line receiver (DS90C032), an LVDS quad CMOS differential line driver (DS90C031), and some decoupling capacitors and resistors. Appendix A shows the schematics of this interface card, two PCB diagrams of it are shown in Appendix B, and the pin assignments are shown in Appendix C. There are three headers on the board that allow the interface card to plug into the Nios development board.
3 The Nios Integrated Real-time Image Processing System: Soft System Core
3.1 Overview of the System Core Architecture
In the last chapter, a detailed hardware description of this image processing system was presented. As described, the system core of this image processing system was synthesised and evaluated on Altera's programmable device, the Apex 20K200E. This chapter therefore focuses on describing this soft system core in detail, based on the system-level design methodology and the Nios processor system architecture described earlier.
3.2 Video Memory Controller
This video memory controller drives the SDRAM device to buffer both the raw video data coming from the video camera and the processed video data to be displayed. In order to ensure a lossless transfer, multiple-bank operation is implemented.
This section first introduces the main features of this memory controller, then describes it in detail. Finally, a brief summary is given.
This video memory controller was designed to program the memory device to perform page read, page write, mode register set and internal auto-refresh operations. The SDRAM is controlled by bus commands. There are several command functions, such as the Bank Activate, Bank Precharge, Precharge All, Write/Read, Burst Stop and Mode Register Set commands. However, system operation would be slowed if each of these commands were initialised by a separate Avalon transfer. This video memory controller therefore simplifies matters and supports only Avalon streaming read, Avalon streaming write and Avalon mode register set operations. These operations are fulfilled on the SDRAM device by issuing a sequence of SDRAM commands. For example, when an Avalon streaming read request is initialised, the memory controller first sends out a Bank Activate command to open a specified row in a specified bank; a Page Read command is then issued to read data continuously from that row; finally, the full-page read is terminated by a Precharge command and the whole operation completes.
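The Activate/Read/Precharge sequence for a streaming read can be sketched as a small model. The command tuples and parameter names below are illustrative, not the controller's actual signal names, and real SDRAM timing constraints (tRCD, tRP, CAS latency) are omitted.

```python
# Sketch of the command sequence the video memory controller issues for one
# Avalon streaming read, per the description above. Names are illustrative.

def streaming_read_commands(bank, row):
    """Return the ordered SDRAM commands for one full-page streaming read."""
    return [
        ("ACTIVE", bank, row),      # Bank Activate: open the specified row
        ("READ", bank, 0),          # Page Read: stream data from column 0 onward
        ("PRECHARGE", bank, None),  # terminate the full-page read, close the row
    ]

for cmd in streaming_read_commands(bank=0, row=42):
    print(cmd)
```

Collapsing each Avalon streaming transfer into this fixed three-command pattern is what lets the controller avoid a separate Avalon transfer per SDRAM command.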
There are three sub-modules in this soft core: the Avalon interface, the SDRAM controller and the SDRAM data path module. The Avalon interface module handles data transfers with the Avalon bus and decodes the address, driving the SDRAM controller to generate the proper control signals and data to the SDRAM device for reading and writing. The SDRAM controller generates the actual SDRAM commands to the memory device. The SDRAM data path block handles the video data multiplexing.
In order to ensure that the system works properly, simulations and hardware tests of each IP component, its associated hardware interface, and the whole system are essential. This chapter therefore first discusses the test scheme undertaken for the main IP components and the whole system. It then describes the software development, which includes general issues and the implementation of various image processing algorithms. A performance analysis is given for each of these image processing tests. Finally, a summary is given for this chapter.
3.3 System Tests
Various tests have been undertaken for each video IP component together with its dedicated hardware interface. The units under test include the video memory, cache, video display, video capture and the whole system. This section describes how all these tests were undertaken, in both simulation and hardware verification, and discusses the test results. Furthermore, a special test for the SMMAST-PCW will also be presented.
3.4 Simulations
The simulation scheme and results presented in this section are for the SOPC top module, which includes all video IP components as well as the Nios processor, the Avalon bus module and other Altera-provided IPs such as the SRAM, Flash and Timer. The SOPC Builder generates all instances for simulation. The actual simulations were carried out using the simulation tool ModelSim. Details of setting up a simulation for a Nios processor design can be obtained elsewhere.
3.5 Full System Simulations Results and Discussions
As this system is aimed at general image processing, in order to examine its accuracy a Sobel edge detector was applied in this simulation example. Details of the Sobel edge detector algorithm are explained later. This example utilised the quad-bank operation. In phase 1, video starts to be captured into bank 0; as there is no valid data in the other banks, the display controller sends out data with values of zero. In phase 2, the operations in all banks change: bank 0 starts to be processed and the processed data is put into bank 3, while video continues to be captured into bank 1. In phase 3, the video display moves to bank 3; this bank contains valid data which has just been processed, so in this phase valid data starts to be sent to the display. Table 3-1 lists some pixel data used for simulation. These data are fed into the capture controller, so by comparing the processed data output on the display with the calculated results, a judgement of the accuracy of the image calculations and the correctness of the data transfer can be obtained.
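The phase-by-phase bank assignment can be modelled as a simple rotation in which capture, processing and display never touch the same bank at once. The phase-3 capture and processing banks below are assumptions extrapolated from the rotation pattern, since the text only states which bank is displayed in that phase.

```python
# Illustrative model of the quad-bank rotation: per phase, which bank is
# captured into, which is processed (source -> destination), and which is
# displayed. Phase 3's capture/process assignments are assumed, not stated.

def bank_roles(phase):
    """Return (capture_bank, process_src, process_dst, display_bank)."""
    if phase == 1:
        return (0, None, None, None)   # capture only; display emits zeros
    if phase == 2:
        return (1, 0, 3, None)         # process bank 0 into bank 3; capture to 1
    if phase == 3:
        return (2, 1, 0, 3)            # display bank 3, which now holds valid data
    raise ValueError("later phases continue the rotation")

for p in (1, 2, 3):
    print(p, bank_roles(p))
```

Because each function owns a different bank in every phase, capture, processing and display can proceed concurrently without contending for the same SDRAM bank, which is what makes the lossless transfer possible.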
Table 3-1 Full simulation data input sets
Table 3-2 lists the calculated results, based on the Sobel algorithm, for the input data shown in Table 3-1.
The output data simply takes the lowest 2 bytes of the calculated results.This is different from the actual implementation of the Sobel edge detector.
Table 3-2 Estimated full simulation data output sets
Note: D/C - don't care
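The per-pixel Sobel calculation used to check the simulation output, including the lowest-2-bytes simplification mentioned above, can be sketched as follows. The 3x3 neighbourhood values are illustrative and are not the data from Table 3-1.

```python
# Minimal Sobel gradient at one pixel. Matching the simulation's
# simplification, only the lowest 2 bytes (16 bits) of |Gx| + |Gy| are kept.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel

def sobel_pixel(window):
    """window: 3x3 list of pixel intensities centred on the pixel of interest."""
    gx = sum(GX[r][c] * window[r][c] for r in range(3) for c in range(3))
    gy = sum(GY[r][c] * window[r][c] for r in range(3) for c in range(3))
    return (abs(gx) + abs(gy)) & 0xFFFF     # keep the lowest 2 bytes only

window = [[10, 10, 10],
          [10, 10, 10],
          [50, 50, 50]]                      # strong horizontal edge below centre
print(sobel_pixel(window))                   # gx = 0, gy = 160 -> output 160
```

Applying this calculation by hand to the Table 3-1 input data yields the expected outputs of Table 3-2, against which the simulated display stream is compared.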
4 Conclusions
Vision-based systems have been used in a wide range of applications. More and more applications demand such systems in a compact size, so that they can be placed close to the point of use, while offering high performance and the potential to be upgraded or optimised in future. Furthermore, commercial investors also press for such systems to be developed more quickly and more cost-effectively. Conventional desktop- or embedded-microprocessor-based vision systems struggle to meet these demands, as they are bulky, time-consuming to develop, lack flexibility, or are difficult to upgrade and optimise.
References
[1] S. Muramatsu, Y. Otsuka, H. Takenaga, Y. Kobayashi, I. Furusawa, T. Monji, "Image processing device for automotive vision systems", IEEE Intelligent Vehicle Symposium 2002, 17-21 June 2002, Vol. 1, pp. 121-126.
[2] J. Kang, R. Doraiswami, "Real-time Image Processing System for Endoscopic Applications", IEEE CCECE 2003, Canada, May 2003, Vol. 3, pp. 1469-1472.
[3] J. M. Arnold, D. A. Buell, D. T. Hoang, D. V. Pryor, N. Shirazi, M. R. Thistle, "The Splash 2 processor and applications", Proceedings of the 1993 IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD '93), 3-6 Oct 1993, pp. 482-485.
[4] T. Ikenaga, T. Ogura, "A DTCNN universal machine based on highly parallel 2-D cellular automata CAM²", IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, May 1998, Vol. 45, Issue 5, pp. 538-546.
[5] R. Canals, A. Roussel, J.-L. Famechon, S. Treuillet, "A Biprocessor-Oriented Vision-Based Target Tracking System", IEEE Transactions on Industrial Electronics, April 2002, Vol. 49, Issue 2, pp. 500-506.