Abstract: This paper proposes a hardware design method for a portable video terminal based on the Xscale chip. It first introduces the characteristics of portable video terminals and the OpenGL technology and standards developed for them. On this basis, an implementation scheme built around the Xscale chip is proposed, and the functional characteristics of the Xscale chip are described in detail. Finally, the key techniques of video image processing are adapted to be more suitable for implementation on the Xscale chip, and an analysis of the implementation results is given.
Key words: Xscale; OpenGL; portable device; hardware interface
1 Introduction
In today's information society, the influence of information technology and the information industry, characterized by multimedia, on human society is becoming ever more pronounced. Multimedia changes how information is packaged, digitizes knowledge, and offers people great convenience and enjoyment in acquiring it. Friendly human-machine interfaces, varied multimedia teaching software, attractive electronic entertainment programs, immersive multimedia shopping-guide systems, all kinds of information appliances, and efficient, convenient online inquiry services all reflect the role of multimedia, leading people into a world of sound and color. At the same time, the development and application of multimedia have greatly promoted the mutual penetration and rapid growth of many industries, profoundly changing the working environment and lifestyle of human society. It is no exaggeration to say that the formation and development of the multimedia industry has not only caused a revolution in the computer industry but is also profoundly shaping the transformation of human society.
The emerging global market for portable image-processing and display terminals is spurring huge demand for the next generation of handheld mobile devices with complex image-rendering capability. This highlights the technical challenges of deploying image-processing terminals in portable devices, while also creating market opportunities for developers. Only two years ago, mobile devices such as mobile phones offered basic image processing only as an add-on feature, much as text messaging first appeared on phones a few years earlier. Users quickly began to demand more sophisticated imaging functions on their handheld devices. Manufacturers, in turn, are exploiting advanced image-rendering features to build more sophisticated, interactive devices such as high-performance gaming terminals and real-time video surveillance equipment. As performance improves, China's development of image and video handheld devices is advancing faster than anywhere else in the world, with especially large market demand in industrial control, unattended monitoring, and robotics.
The real technology race began with the second generation of handheld mobile devices with image-rendering capability. Vendors will face unprecedented competition on raw technical performance, especially once the API standards are fully established, since there will then be little to differentiate them. Some believe manufacturers should avoid expanding their platforms too quickly with proprietary extensions, and that the industry must ensure the OpenGL ES standard API evolves with the market. In fact, the OpenGL ES roadmap has been established and OpenGL ES 2.0 has been developed. While the current API is based on a state machine, it must evolve into a shader-based standard for third-generation handheld mobile devices. Today's API rests on a fixed-function pipeline that enables or disables specific features according to the current rendering state, allowing manufacturers to build different end devices based on throughput, pixel count, and similar characteristics. With OpenGL ES 2.0, certain stages of the drawing pipeline become programmable, letting content developers define precisely how vertices and pixels are processed. This gives suppliers a larger feature set, more room for performance innovation, and greater differentiation, particularly in visual quality and performance, while maintaining a common platform for developers.
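To make the contrast concrete, the sketch below shows the OpenGL ES 2.0 style of pipeline programming: instead of toggling fixed-function state, the application compiles and links a vertex and a fragment shader. This is a minimal illustration assuming a standard GLES2 header; the shader strings and the function name are our own, not from the paper, and error checking is omitted.

```c
#include <GLES2/gl2.h>

/* OpenGL ES 2.0: the application supplies small programs (shaders) that
 * define exactly how each vertex and each fragment is processed.
 * Illustrative placeholder shaders; error handling omitted for brevity. */
static const char *vs_src =
    "attribute vec4 a_position;\n"
    "void main() { gl_Position = a_position; }\n";

static const char *fs_src =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint build_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;   /* selected at draw time with glUseProgram(prog) */
}
```

In the fixed-function model a vendor differentiates by which states it accelerates; in the ES 2.0 model the differentiation moves into how fast it runs arbitrary shader code.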
2 OpenGL data processing flow
OpenGL is a software interface to graphics hardware. Its main function is to draw 2D or 3D objects into a frame buffer. Objects are described as a series of vertices (which define geometric objects) or pixels (which define an image). OpenGL converts this data into pixels through several processing steps, and the pixels form the final desired image in the frame buffer. The discussion here has two main parts: the OpenGL fundamentals, which explain basic OpenGL concepts such as what a geometric primitive is and how OpenGL implements its client-server execution model; and the basic OpenGL operations, illustrated by a high-level block diagram of the process by which OpenGL processes data and generates the corresponding image in the frame buffer.
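As a concrete illustration of this vertex-to-framebuffer flow, the sketch below submits three vertices as one triangle primitive through the OpenGL ES 1.x client-side vertex-array API; the rasterizer then turns the primitive into fragments and finally pixels in the frame buffer. The vertex data is our own example, not from the paper.

```c
#include <GLES/gl.h>

/* Three 2D vertices describing one triangle primitive. */
static const GLfloat triangle[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f,
};

void draw_triangle(void)
{
    glClear(GL_COLOR_BUFFER_BIT);            /* start from an empty frame buffer */
    glEnableClientState(GL_VERTEX_ARRAY);    /* vertices flow in from a client array */
    glVertexPointer(2, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);        /* vertices -> primitive -> fragments -> pixels */
    glDisableClientState(GL_VERTEX_ARRAY);
}
```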
Many OpenGL commands directly affect the drawing of OpenGL objects such as points, lines, polygons, and bitmaps. Other commands, such as those used for anti-aliasing or texture operations, mainly control how images are generated. Still others operate on the frame buffer itself. Figure 1 shows a more detailed OpenGL processing flow chart. As the figure shows, three sets of arrows cross most of the stages; they represent vertices and the two main data types associated with them: color values and texture coordinates. Note that vertices are first assembled into primitives, then into fragments, and finally become pixels in the frame buffer. The effect of an OpenGL command depends largely on whether a particular mode is active. For example, lighting-related commands can produce a properly lit object only when the lighting feature is enabled. To turn on a specific mode, call glEnable() with the appropriate constant (e.g. GL_LIGHTING); call glDisable() to turn the mode off.
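A minimal sketch of the mode mechanism just described; the light parameters are illustrative values, not from the paper.

```c
#include <GLES/gl.h>

/* Enable lighting and one light source; drawing commands issued while
 * these modes are active will produce lit geometry. */
void setup_lighting(void)
{
    static const GLfloat diffuse[] = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable(GL_LIGHTING);                       /* turn the lighting mode on */
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);   /* configure the light */
}

void teardown_lighting(void)
{
    glDisable(GL_LIGHT0);
    glDisable(GL_LIGHTING);                      /* turn the mode back off */
}
```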
3 Implementation and optimization based on Xscale
The Xscale core is a processor core based on the ARMv5TE architecture and is an upgrade of Intel's StrongARM. It offers high performance and low power consumption; however, it is delivered as a core inside ASSPs (Application Specific Standard Products). The PXA270, PXA250, and PXA210 application processors are ASSPs designed for portable devices. The first application processor using the Xscale core was Intel's 80200, intended for I/O applications. Figure 2 shows the structure of the Xscale microarchitecture.
Figure 2: System structure of the Xscale microarchitecture
Like the StrongARM, the Xscale core still uses the ARM architecture, so its processor structure is basically the same as that of other ARM processors, but with major improvements in pipeline design, DSP processing, and instruction design. The Xscale super-pipeline consists of a main execution pipeline, a memory pipeline, and a MAC pipeline.
Among them, the main execution pipeline has seven stages: F1/F2, ID, RF, X1, X2, and XWB, where F1/F2 are the two instruction-fetch stages, ID is instruction decode, RF is register file read/operand shift, X1 is ALU execute, X2 is state execute, and XWB is write-back.
In the F1/F2 instruction-fetch stages, Xscale splits fetch across two pipeline stages to support dynamic branch prediction; the branch target buffer (BTB) and the instruction fetch unit (IFU) operate in these two stages. The ID instruction-decode stage performs general instruction decoding, detects undefined instructions to raise exceptions, and dynamically expands complex instructions such as LDM, STM, and SWP into sequences of simple instructions. The RF register-file/shift stage mainly performs register reads and writes; as in other ARM-architecture processors, the shift operation is performed in the second half of this stage, which supplies operands for ALU execution, MAC operations, memory writes, and the coprocessor interface. The X1 execute stage performs ALU calculations, conditional instruction execution, and branch target determination. The X2 execute stage selects which ALU outputs to pass on to the next stage (XWB) and performs program status register (PSR) operations. The XWB write-back stage writes results back to the register file unit (RFU). Data dependences can arise during pipeline operation, and Xscale uses bypass (forwarding) techniques to reduce pipeline stalls.
The Intel Xscale core, like StrongARM, supports conditionally executed instructions, and Xscale can selectively modify the condition codes to optimize instruction sequences. Optimization proceeds mainly along the following lines. First, condition checking is optimized: because the Xscale core can selectively update the condition-code state, the number of comparison instructions needed for if-else constructs and loops is reduced. Second, the branch structure is optimized: branches reduce pipeline efficiency, and branch prediction improves it, but the number of predictions is limited by the size of the branch target buffer; since programs typically contain far more branch instructions than the buffer can track, reducing branch instructions helps optimization. Third, complex expressions such as chains of logic instructions reduce instruction efficiency and can often be implemented with conditionally executed instructions instead. Finally, the loading of immediates and integer multiplication and division can be optimized: the Xscale core uses MOV or MVN instructions to load immediates into registers, and ORR, BIC, and ADD instructions can be combined with them to construct a wider set of constants.
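As a small illustration of replacing a branch with conditional execution (our own example, not from the paper): a compiler targeting an ARM core such as Xscale can lower the C function below to a compare followed by a conditionally executed move, avoiding a branch entirely and so consuming no BTB entry.

```c
/* Branchy form:            Conditional-execution form an ARM compiler
 *   CMP  r0, r1              typically emits instead:
 *   BGE  skip                  CMP   r0, r1
 *   MOV  r0, r1                MOVLT r0, r1
 * skip:                      (no branch, no BTB entry consumed)
 */
int max_int(int a, int b)
{
    return (a < b) ? b : a;
}
```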
4 Experimental results and analysis
Color image segmentation can be performed by partitioning pixels in a color space or by partitioning them spatially. Methods based on spatial information in the image fall into two types. The first uses the edges between image regions (pixels) to perform segmentation. To obtain edge information, Sobel, Laplacian, Canny, and similar operators are generally used for edge detection. However, when the image is noisy, the edges these operators produce are often isolated points or short broken segments, and even with edge-closing methods it is difficult to recover the precise boundary of a region. The second type uses the adjacency and similarity between regions (pixels) for region growing and region merging; region growing can be seen as a special case of region merging. The key issue for region merging is to formulate reasonable merging rules and stopping criteria. JSEG, after determining seed regions, applies a globally optimized rule for region growing and then uses threshold-based region merging to complete the segmentation. K. Haris uses the watershed segmentation algorithm to produce an initial segmentation of the image, then applies a fast region-merging algorithm that repeatedly merges the regions with the closest color distance, stopping when the number of regions in the image reaches a preset value.
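A minimal sketch of the Sobel edge detection mentioned above, applied to an 8-bit grayscale image; the function name and buffer layout are our own assumptions, not from the paper.

```c
#include <stdlib.h>

/* Sobel gradient magnitude for an 8-bit grayscale image stored row-major.
 * Border pixels are left untouched for simplicity. */
void sobel(const unsigned char *src, unsigned char *dst, int w, int h)
{
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            const unsigned char *p = src + y * w + x;
            /* Horizontal and vertical 3x3 Sobel kernels */
            int gx = -p[-w-1] + p[-w+1] - 2*p[-1] + 2*p[1] - p[w-1] + p[w+1];
            int gy = -p[-w-1] - 2*p[-w] - p[-w+1] + p[w-1] + 2*p[w] + p[w+1];
            int mag = abs(gx) + abs(gy);   /* |gx|+|gy| approximates the magnitude */
            dst[y * w + x] = (unsigned char)(mag > 255 ? 255 : mag);
        }
    }
}
```

On noisy input this map contains exactly the isolated points and broken segments described above, which is why the paper turns to region-based information as well.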
Recent studies have shown that fusing multiple kinds of information is more conducive to achieving a reasonable segmentation. That is, one must consider both the optimized classification of pixels in feature spaces such as color and spatial information such as edges and the adjacency between regions (pixels) in the image. Milan Sonka showed by theory and experiment that a segmentation algorithm using both color and edge information obtains more reasonable results than methods using only color information or only edge information.
This paper combines the color and spatial information of the image and proposes a new video image segmentation method. After color quantization, the algorithm forms an initial segmentation of the captured still video image through an incremental region-growing algorithm; at this point the segmentation is essentially a partition of pixels in the color space. Then a region distance is defined that fuses the regions' color information with spatial edge and adjacency information, and hierarchical region merging is performed according to this distance.
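The hierarchical merging step can be sketched as the greedy loop below: repeatedly find the pair of adjacent regions with the smallest distance and merge them until a target region count is reached. The data structures and the color-only distance are our own illustrative assumptions; the paper's actual region distance also fuses edge and adjacency information.

```c
#include <float.h>

#define MAX_REGIONS 1024

typedef struct {
    float mean[3];   /* mean color of the region */
    int   size;      /* pixel count */
    int   alive;     /* 0 once merged into another region */
} Region;

/* Illustrative distance: color difference only. */
static float region_distance(const Region *a, const Region *b)
{
    float d = 0.0f;
    for (int c = 0; c < 3; c++) {
        float t = a->mean[c] - b->mean[c];
        d += t * t;
    }
    return d;
}

/* Greedy hierarchical merging: adj[i][j] != 0 means regions i and j touch. */
void merge_regions(Region *r, unsigned char adj[][MAX_REGIONS], int n, int target)
{
    int remaining = n;
    while (remaining > target) {
        int bi = -1, bj = -1;
        float best = FLT_MAX;
        for (int i = 0; i < n; i++) {          /* find closest adjacent pair */
            if (!r[i].alive) continue;
            for (int j = i + 1; j < n; j++) {
                if (!r[j].alive || !adj[i][j]) continue;
                float d = region_distance(&r[i], &r[j]);
                if (d < best) { best = d; bi = i; bj = j; }
            }
        }
        if (bi < 0) break;                     /* no adjacent pair left */
        /* Merge bj into bi: size-weighted mean color, union of neighbors. */
        for (int c = 0; c < 3; c++)
            r[bi].mean[c] = (r[bi].mean[c] * r[bi].size +
                             r[bj].mean[c] * r[bj].size) / (r[bi].size + r[bj].size);
        r[bi].size += r[bj].size;
        r[bj].alive = 0;
        for (int k = 0; k < n; k++)
            if (adj[bj][k]) { adj[bi][k] = adj[k][bi] = 1; }
        adj[bi][bi] = 0;
        remaining--;
    }
}
```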
Given that there is no uniform evaluation standard for image and video segmentation quality, we apply the commonly used subjective visual evaluation method. Segmentation experiments and analysis on a set of images show that the algorithm proposed in this paper segments video images without significant texture well: the incremental region growing helps recover more image detail, and the rule for stopping region merging is also very effective. The segmentation results can be used for region-based image retrieval, object-based image content analysis, and similar applications. In future work, the texture features of the image could be incorporated into the algorithm to further improve segmentation quality.
5 Summary of innovations
The innovation of this paper is an interface design scheme for portable video terminal equipment based on Xscale, together with a new video image segmentation method; experimental verification shows that the expected design goals are achieved. So far, however, there is no method that yields good segmentation results for all images, no way to evaluate the results obtained by different methods consistently, and no theory to guide the choice of an appropriate segmentation method for a given image. Because image segmentation research still lacks a unified theory, we often rely on our own knowledge and experience when solving practical segmentation problems. All of this limits not only the development of image analysis and understanding research but also the application of machine vision technology in industrial and agricultural production.
References:
1 Ma Zhongmei, Ma Guangyun, et al. ARM Embedded Processor and Application Basics [M]. Beijing University of Aeronautics and Astronautics Press, 2002.
2 Chen Zhanglong, Tang Zhiqiang, Tu Shiliang. Embedded Technology and Systems: Intel Xscale Architecture and Development [M]. Beijing University of Aeronautics and Astronautics Press, 2004.
3 Ye Qixiang, Gao Wen, Wang Weiqiang, Huang Tiejun. A Color Image Segmentation Algorithm Combining Color and Spatial Information [J]. Institute of Computing Technology, Chinese Academy of Sciences, 2005.
4 Xie Jianping. Research on Multi-Standard Digital Video Signal Conversion Circuits for PDP TV [J]. Microcomputer Information, 2006, 7-2: 222-224.