Sensors & Transducers
International Official Journal of the International
Frequency Sensor Association (IFSA) Devoted to
Research and Development of Sensors and Transducers
Volume 159, Issue 11, November 2013
Editor-in-Chief
Sergey Y. YURISH
IFSA Publishing: Barcelona Toronto
http://www.sensorsportal.com/
Copyright 2013 IFSA Publishing. All rights reserved.
This journal and the individual contributions in it are protected under copyright by IFSA Publishing, and the
following terms and conditions apply to their use:
Photocopying: Single photocopies of single articles may be made for personal use as allowed by national
copyright laws. Permission of the Publisher and payment of a fee are required for all other photocopying,
including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all
forms of document delivery.
Derivative Works: Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for
internal circulation within their institutions. Permission of the Publisher is required for resale or distribution
outside the institution.
Permission of the Publisher is required for all other derivative works, including compilations and translations.
Authors' copies of Sensors & Transducers journal and articles published in it are for personal use only.
Address permissions requests to: IFSA Publisher by e-mail: editor@sensorsportal.com
Notice: No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a
matter of products liability, negligence or otherwise, or from any use or operation of any methods, products,
instructions or ideas contained in the material herein.
Printed in the USA.
Sensors & Transducers
Volume 159, Issue 11
November 2013
www.sensorsportal.com
e-ISSN 1726-5479
ISSN 2306-8515
Editor-in-Chief: Professor Sergey Y. Yurish,
Tel.: +34 696067716, e-mail: editor@sensorsportal.com
Editors for Western Europe
Meijer, Gerard C.M., Delft Univ. of Technology, The Netherlands
Ferrari, Vittorio, Universitá di Brescia, Italy
Mescheder, Ulrich, Univ. of Applied Sciences, Furtwangen, Germany
Editor for Eastern Europe
Sachenko, Anatoly, Ternopil National Economic University, Ukraine
Editors for North America
Katz, Evgeny, Clarkson University, USA
Datskos, Panos G., Oak Ridge National Laboratory, USA
Josse, Fabien J., Marquette University, USA
Editor for South America
Costa-Felix, Rodrigo, Inmetro, Brazil
Editors for Asia
Ohyama, Shinji, Tokyo Institute of Technology, Japan
Zhengbing, Hu, Huazhong Univ. of Science and Technol., China
Li, Gongfa, Wuhan Univ. of Science and Technology, China
Editor for Asia-Pacific
Mukhopadhyay, Subhas, Massey University, New Zealand
Editor for Africa
Habib, Maki K., American University in Cairo, Egypt
Editorial Board
Abdul Rahim, Ruzairi, Universiti Teknologi, Malaysia
Abramchuk, George, Measur. Tech. & Advanced Applications, Canada
Aluri, Geetha S., Globalfoundries, USA
Ascoli, Giorgio, George Mason University, USA
Atalay, Selcuk, Inonu University, Turkey
Atghiaee, Ahmad, University of Tehran, Iran
Augutis, Vygantas, Kaunas University of Technology, Lithuania
Ayesh, Aladdin, De Montfort University, UK
Baliga, Shankar, B., General Monitors, USA
Barlingay, Ravindra, Larsen & Toubro - Technology Services, India
Basu, Sukumar, Jadavpur University, India
Bousbia-Salah, Mounir, University of Annaba, Algeria
Bouvet, Marcel, University of Burgundy, France
Campanella, Luigi, University La Sapienza, Italy
Carvalho, Vitor, Minho University, Portugal
Changhai, Ru, Harbin Engineering University, China
Chen, Wei, Hefei University of Technology, China
Cheng-Ta, Chiang, National Chia-Yi University, Taiwan
Chung, Wen-Yaw, Chung Yuan Christian University, Taiwan
Cortes, Camilo A., Universidad Nacional de Colombia, Colombia
D'Amico, Arnaldo, Università di Tor Vergata, Italy
De Stefano, Luca, Institute for Microelectronics and Microsystem, Italy
Ding, Jianning, Changzhou University, China
Djordjevich, Alexandar, City University of Hong Kong, Hong Kong
Donato, Nicola, University of Messina, Italy
Dong, Feng, Tianjin University, China
Erkmen, Aydan M., Middle East Technical University, Turkey
Gaura, Elena, Coventry University, UK
Gole, James, Georgia Institute of Technology, USA
Gong, Hao, National University of Singapore, Singapore
Gonzalez de la Rosa, Juan Jose, University of Cadiz, Spain
Goswami, Amarjyoti, Kaziranga University, India
Guillet, Bruno, University of Caen, France
Hadjiloucas, Sillas, The University of Reading, UK
Hao, Shiying, Michigan State University, USA
Hui, David, University of New Orleans, USA
Jaffrezic-Renault, Nicole, Claude Bernard University Lyon 1, France
Jamil, Mohammad, Qatar University, Qatar
Kaniusas, Eugenijus, Vienna University of Technology, Austria
Kim, Min Young, Kyungpook National University, Korea
Kumar, Arun, University of Delaware, USA
Lay-Ekuakille, Aime, University of Lecce, Italy
Li, Si, GE Global Research Center, USA
Lin, Paul, Cleveland State University, USA
Liu, Aihua, Chinese Academy of Sciences, China
Liu, Chenglian, Long Yan University, China
Mahadi, Muhammad, University Tun Hussein Onn Malaysia, Malaysia
Mansor, Muhammad Naufal, University Malaysia Perlis, Malaysia
Marquez, Alfredo, Centro de Investigacion en Materiales Avanzados, Mexico
Mishra, Vivekanand, National Institute of Technology, India
Moghavvemi, Mahmoud, University of Malaya, Malaysia
Morello, Rosario, University "Mediterranea" of Reggio Calabria, Italy
Mulla, Imtiaz Sirajuddin, National Chemical Laboratory, Pune, India
Nabok, Aleksey, Sheffield Hallam University, UK
Neshkova, Milka, Bulgarian Academy of Sciences, Bulgaria
Passaro, Vittorio M. N., Politecnico di Bari, Italy
Patil, Devidas Ramrao, R. L. College, Parola, India
Penza, Michele, ENEA, Italy
Pereira, Jose Miguel, Instituto Politecnico de Setubal, Portugal
Pogacnik, Lea, University of Ljubljana, Slovenia
Pullini, Daniele, Centro Ricerche FIAT, Italy
Reig, Candid, University of Valencia, Spain
Restivo, Maria Teresa, University of Porto, Portugal
Rodríguez Martínez, Angel, Universidad Politécnica de Cataluña, Spain
Sadana, Ajit, University of Mississippi, USA
Sadeghian Marnani, Hamed, TU Delft, The Netherlands
Sapozhnikova, Ksenia, D. I. Mendeleyev Institute for Metrology, Russia
Singhal, Subodh Kumar, National Physical Laboratory, India
Shah, Kriyang, La Trobe University, Australia
Shi, Wendian, California Institute of Technology, USA
Shmaliy, Yuriy, Guanajuato University, Mexico
Song, Xu, An Yang Normal University, China
Srivastava, Arvind K., LightField, Corp, USA
Stefanescu, Dan Mihai, Romanian Measurement Society, Romania
Sumriddetchkajorn, Sarun, Nat. Electr. & Comp. Tech. Center, Thailand
Sun, Zhiqiang, Central South University, China
Sysoev, Victor, Saratov State Technical University, Russia
Thirunavukkarasu, I., Manipal University Karnataka, India
Thomas, Sadiq, Heriot Watt University, Edinburgh, UK
Tianxing, Chu, Research Center for Surveying & Mapping, Beijing, China
Vazquez, Carmen, Universidad Carlos III Madrid, Spain
Wang, Jiangping, Xian Shiyou University, China
Xu, Han, Measurement Specialties, Inc., USA
Xu, Weihe, Brookhaven National Lab, USA
Xue, Ning, Agiltron, Inc., USA
Yang, Dongfang, National Research Council, Canada
Yang, Shuang-Hua, Loughborough University, UK
Yaping Dan, Harvard University, USA
Zakaria, Zulkarnay, University Malaysia Perlis, Malaysia
Zhang, Weiping, Shanghai Jiao Tong University, China
Zhang, Wenming, Shanghai Jiao Tong University, China
Sensors & Transducers Journal (ISSN 2306-8515) is a peer-reviewed international journal published monthly online by the International Frequency Sensor
Association (IFSA). Available in both print and electronic (printable PDF) formats. Copyright © 2013 by International Frequency Sensor Association.
All rights reserved.
http://www.sensorsportal.com/
Sensors & Transducers Journal
Contents
Volume 159
Issue 11
November 2013
www.sensorsportal.com ISSN 2306-8515
e-ISSN 1726-5479
Research Articles
A Method of Using Digital Image Processing for Edge Detection of Red Blood Cells
Jinping Li, Hongshan Mu, Wei Xu.............................................................................................. 1
Research and Implementation of Pattern Recognition Based on Adaboost Algorithm
Luqun Chang, Zhengfu Bian ...................................................................................................... 7
Image Based Computer-Aided Manufacturing Technology
Zhanqi Hu, Xiaoqin Zhang, Jinze Li, Wei Li ............................................................................... 13
A Detection Algorithm for Image Copy-move Forgery Based on Improved Circular
Projection Matching and PCA
Yanfen Gan, Jing Cang.............................................................................................................. 19
Lossless Compression of Grayscale Digital Image Based on Two-Dimensional
Differential Prediction Algorithm
Wei Liang, Guangxian Zhang, Liang Tao .................................................................................. 26
Unsupervised Segmentation Method for Diseases of Soybean Color Image
Based on Fuzzy Clustering
Jiangsheng Gui, Li Hao, Shusen Sen, Wenshu Li, Yanfei Liu................................................... 32
Level Set Based Shape Model for Automatic Linear Feature Extraction
from Satellite Imagery
Yi Liu, Fansi Kong, Fei Yan........................................................................................................ 39
The Study of Remote Sensing Image Classification Based on Support
Vector Machine
Zhang Jian-Hua.......................................................................................................................... 46
Monitoring Data Cleaning of Urban Tunnels by Fusing PCA and CLARA Algorithms
Kun Hao Tang, Luo Zhong, Lin Li, Guang Yang........................................................................ 54
A Framework for Detecting the Self-heating Source in Oil Tank
Hui Liu, Yunfei Hu ...................................................................................................................... 60
Numerical Simulation of Flame Temperature Field in Rotary Kiln
Gongfa Li, Jia Liu, Hegen Xiong, Jianyi Kong, Zhen Gao, Yikun Zhang, Wentao Xiao,
Fuwei Cheng .............................................................................................................................. 66
Research on Temperature Field and Stress Field of Prefabricate Block Electric
Furnace Roof
Wentao Xiao, Gongfa Li, Guozhang Jiang, Jianyi Kong, Jia Liu, Shaoyang Shi, Yikun
Zhang, Fuwei Cheng, Tao He.................................................................................................... 74
Influence Factors on Stress Distribution of Electric Furnace Roof
Jia Liu, Gongfa Li, Guozhang Jiang, Jianyi Kong, Shao Yang Shi, Yikun Zhang,
Wentao Xiao, Fu Wei Cheng, Tao He........................................................................................ 80
Measurement and Control System of Self-propelled Levelling Machine
Based on Inclination Sensor and Laser
Liu Jiangtao, Cui Baojian, Jiang Haiyong, Yi Jinggang ............................................................. 87
Research and Key Bearing Part Simulation of Finite Element Analysis Platform
of Gantry Crane Based on ANSYS
He Bin-Hui .................................................................................................................................. 92
Research on the Rock Fragmentation Under Static and Dynamic Loads
Peng Qing .................................................................................................................................. 100
Dynamic Crack Propagating Mechanism of Rock Materials Based on Different
Weighted Functions
Huijun Wu, Jing Zhao, Zhongchang Wang, Xunguo Zhu, Deshen Zhao................................... 107
Study on Multi-environment Factor Monitoring System of the Livestock Breeding
Xiao Yu, Hai-Ye Yu .................................................................................................................... 113
Multichannel Seismic Deconvolution Using Bayesian Method
Li Yanqin, Peng Hongwei, Yu Ruihong...................................................................................... 120
Towards a Confidence-based Routing Algorithm in Delay Tolerant Network
Li-Xia Liu, Xiao-Hua Qiu............................................................................................................. 126
ANN RBF Based Approach of Risk Assessment for Aviation ATM Network
Lan Ma, Deng Pan, Zhijun Wu................................................................................................... 132
RBF Neural Network Combined with Knowledge Mining Based on Environment
Simulation Applied for Photovoltaic Generation Forecasting
Dongxiao Niu, Ling Ji, Xiaomin Xu, Peng Wang........................................................................ 138
An Efficient Optimization Algorithm for Super High Dimensional Numerical Function
Inspired by Cellular Differentiation
Yanjiang Wang, Chengna Yuan, Hui Li, Yujuan Qi ................................................................... 143
Method of Optimal Scheduling of Cascade Reservoirs based on Improved Chaotic
Ant Colony Algorithm
Hongmin Gao, Baohua Xu, Zhenli Ma, Lin Zhang, Chenming Li............................................... 149
An Achievable Rate Region for Relay Multiple Access Channel
Based on Decode-and-Forward
Xiaoxia Song, Yong Li................................................................................................................ 155
Method of Reservoir Optimal Operation Based on Improved Simulated Annealing
Genetic Algorithm
Chenming Li, Baohua Xu, Hongmin Gao, Xueying Yin, Lizhong Xu ......................................... 160
Conceptual Analysis of Node Application Program of Semantic Reasoning Network
Shi Yun Ping .............................................................................................................................. 167
Solution to Degree Diameter-2 Graph Problem in Parallel Machine Tools Control
Network Based on Genetic Algorithm
Xiang Chen, Jun-Yong Tang, Yong Zhang................................................................................ 174
Water Inrush Source Identification of Mine Based on D-S Evidence Theory
Jianyu Xiao, Aili Yang ................................................................................................................ 179
Design and Application of Counter's Interface IP Core Based on Avalon Bus
Huazhu Wu, Chunguang Zhang, Naihao Luo............................................................................ 185
Research on a Novel Deadbeat Hybrid Flux Observer
Guifeng Wang, Jianguo Jiang, Shutong Qiao, Fangtian Zhu .................................................... 190
A Multilevel SVPWM Algorithm for Linear Modulation and Over
Modulation Operation
Wei Wu, Jianguo Jiang, Guifeng Wang, Shutong Qiao, He Liu................................................. 198
The Intelligent Fiber Knitted Fabrics Development and Function Test
Chen Guofen, Yang Lefang ....................................................................................................... 206
Loading and Unloading Manipulator Controlled by Built-in PLC in CNC System
Hu Fuwen................................................................................................................................... 212
Blind Separation of Noisy Mixed Speech Based on Wiener Filtering and Independent
Component Analysis
Hongyan Li, Xueying Zhang....................................................................................................... 218
Support Vector Machine Based Intrusion Detection Method Combined
with Nonlinear Dimensionality Reduction Algorithm
Xiaoping Li ................................................................................................................................. 226
A Kind of Network Intrusion Detection Algorithm Based on Quantum-behaved
Particle Swarm Optimization
Qiang Song, Lingxia Liu ............................................................................................................. 230
A Mechanism of Initiative Transmission to Send Message on WebGIS
Luo Xiangang, Xie Zhong, Luo Jin............................................................................................. 236
Parallel Computing of Polymer Chains Based on Monte Carlo Method
Hong Li, Bin Gong, Chang-Ji Qian, He-Bei Gao........................................................................ 242
Research on the Evaluation of Low Carbon Economic Development
by Fuzzy Algorithm
Wang Fan................................................................................................................................... 249
XML Data Retrieval Model Based On Two-dimensional Table Datasets
Lichuan Gu, Qingyan Guo, Youhua Zhang................................................................................ 255
The Reasoning Mining of Inner-outer Unknown Information Base
on Dynamic Packet Sets
Xiaojuan Wang, Yang Wang...................................................................................................... 263
New Exponential Strengthening Buffer Operators and Numerical Simulation
Cuifeng Li, Huajie Ye, Zhengguo Weng..................................................................................... 271
An Improved Differential Evolution Algorithm Based on Statistical Log-linear Model
Zhehuang Huang ....................................................................................................................... 277
Study Horizontal Screw Conveyors Efficiency Flat Bottomed Bins EDEM Simulation
Yanping Yao, Wenjun Meng, Ziming Kou.................................................................................. 282
Decoupling Research of a Three-dimensional Force Tactile Sensor Based on Radical
Basis Function Neural Network
Feilu Wang, Xin Sun, Yubing Wang, Junxiang Ding, Hongqing Pan, Quanjun Song,
Yong Yu, Feng Shuang............................................................................................................. 289
Optimization of Power Allocation for a Hybrid Wind-Hydro Power System
Xuankun Song, Hui Zhou, Zhi-Juan Shang, Rong Cong ........................................................... 299
Dynamical Modeling and Optimization of the Roll Forming Machine
based on the Particle Swarm Optimization with Negative Gradient
Na Risu, Li Qiang ....................................................................................................................... 307
Optimal Design Based on Rough Set and Implementation of Worm Gear
in Valve Actuator
Fan Chang Xing, Wu Qiang ....................................................................................................... 313
Symmetrical Structure Strong Drive Capability Optocoupler Sensor
Lei Tian, Xinquan Lai ................................................................................................................. 319
Research and Calibration Experiment of Characteristic Parameters
of High Temperature Resistance Strain Gauges
Wang Wen-Rui, Zhang Jia-Ming, Ren Xin, Nie Shuai ............................................................... 324
Research in Algorithm of Image Processing Used in Collision Avoidance Systems
Hu Bin......................................................................................................................................... 330
A Study and Analysis on a Perceptual Image Hash Algorithm
Based on Invariant Moments
Hu Bin......................................................................................................................................... 337
Application of a Force Sensor in Wire Bonding Process
Lei Zhou, Jiangang Li, Lanhui Fu, Zexiang Li ............................................................................ 345
Automatic Measurement and Monitoring Technology for Oil Well
Qibin Yang, Wei Sun, Dazhong Ren.......................................................................................... 351
Design and Practical Application of the Solar Radiation Simulator
Changwu Xu, Kaicheng Huo, Zhigang Ren ............................................................................... 358
Electrical Power System Harmonic Analysis Using Adaptive BSS Algorithm
Chen Yu, Liu Yueliang ............................................................................................................... 364
Nanomanipulators with Reduced Hysteresis and Interferometers Build in NanoFabs
Petr Luskinovich, Vladimir Zhabotinskiy .................................................................................... 369
The study of Sensors Market Trends Analysis Based on Social Media
Shianghau Wu, Jiannjong Guo .................................................................................................. 374
Refractive Index Sensing by Using Nano Fiber Coupler
Liu Tiedong ................................................................................................................................ 379
A Multiple Factors Safety Prediction Algorithm Based on Genetic Neural Networks
in Coal Mine Safe-state
Qi Li-Xia...................................................................................................................................... 385
The Radar Tomography Detection for the Abnormal Moisture Regions
of Huge Grain Pile
Su Yanping, Lian Feiyu .............................................................................................................. 391
Physiological State Monitoring System of the Home Elderly Based
on Multi-frequency Narrowband Power Line Communications
Xujia Wang, Liang Dong ............................................................................................................ 397
Strength Study of Spiral Flexure Spring of Stirling Cryocooler
Wang Wen-Rui, Nie Shuai, Zhang Jia-Ming .............................................................................. 404
Design for Crack Detection System of Wall in Houses Based on SCM
Liu Tianye................................................................................................................................... 409
Vector Quantization Codebook Design and Application Based on the Clonal
Selection Algorithm
Mengling Zhao, Hongwei Liu ..................................................................................................... 415
Study of 3D Wireless Sensor Network Based on Overlap Method
Wu Mingxin ................................................................................................................................ 422
Experimental Study on the Engineering Characteristics of Lime Soil with Different
Lime Content
Sun Xiao, Zhao Mingjie, Wang Kui, Lin Junzhi .......................................................................... 431
Feasibility Research on the System of Real-time Traffic Information Between Taxis
and Passengers
Chao Wang, Yun Cai, Yina Zhang, Wei Jie Sun ....................................................................... 437
Collision Energy Dissipation Calculation and Experiment for Impact Damper
with Particles
Xiao Wang-Qiang, Li Wei........................................................................................................... 442
Longitudinal Ultrasonic Guided Waves for Monitoring the Minor Crack of Rotating
Shaft with Galfenol Transducer
Xiaoyu Wang, Jun Zou, Fuji Wang, Ronghua Li ........................................................................ 450
A Method Based on the Improved Matrix Pencil Algorithm Designed for Voltage
Flicker Detection
Chen Shi, Li Xing-Yuan, Zhu Rui-Ke, Luo Xiao-Yi..................................................................... 457
Research on an Approach to Ultrasound Diffraction Tomography Based on Wave
Theory
Hao-Quan Wang, Le-Nan Cao, Hui-Zhen Chai ......................................................................... 464
Authors are encouraged to submit articles in MS Word (doc) and Acrobat (pdf) formats
by e-mail: editor@sensorsportal.com. Please visit the journal's webpage with preparation instructions:
http://www.sensorsportal.com/HTML/DIGEST/Submition.htm
International Frequency Sensor Association (IFSA).
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 1-6
Sensors & Transducers
© 2013 by IFSA
http://www.sensorsportal.com
A Method of Using Digital Image Processing for Edge
Detection of Red Blood Cells
1 Jinping LI, 2 Hongshan MU, 2 Wei XU
1 Software School, East China Institute of Technology,
330013, China
2 Economic Development Zone, Guanglan Avenue 418, Nanchang 330013, China
2 Tel.: 13699532208
2 E-mail: muhongshan@126.com
Received: 22 July 2013 / Accepted: 25 October 2013 / Published: 30 November 2013
Abstract: Using a series of digital image processing methods, such as gray stretching, median filtering, threshold
segmentation, and edge extraction and detection, we detect variations in red blood cells and identify the shapes
of abnormal red blood cells, with good results: the average detection rate of abnormal red blood cells is above
80 %. This tentative, exploratory study demonstrates the potential of image processing and detection techniques
for wider application in the medical field. Copyright © 2013 IFSA.
Keywords: Image processing, Morphology, Red blood cells, Detection.
1. Introduction
With the development of information technology,
image processing technology is becoming an
essential and effective tool in scientific research. It is
especially widely used and effective in the field of
biomedical engineering.
Besides CT, digital image processing is widely used
in medical diagnosis, such as chromosome analysis
and cancer cell detection [1-4]. According to the
geometric features obtained from the red blood cells,
we can detect and study pathological red blood cells.
The method will play a good demonstration role
for further application in the field of image
processing technology in medicine.
2. Experimental Methods
2.1. Experimental Material
The image samples of red blood cells were provided
by the People's Hospital of Nanfeng County,
Jiangxi Province.
2.2. Experimentation
2.2.1. The Grey Image Stretching of the Red
Blood Cells
Grey image stretching is a linear image
transformation that can greatly improve the visual
quality of an image. The gray level of every point in
the image is transformed according to a
one-dimensional linear function [1]:
f(x) = fA · x + fB  (1)
Applied to the gray levels, the transform is:
DB = f(DA) = fA · DA + fB  (2)
The parameter fA is the slope of the linear
function, fB is the y-axis intercept, DA is the
grayscale of the input image, and DB is the grayscale
of the output image. When fA > 1, the contrast of the
output image is increased; when fA < 1, the contrast
is reduced; when fA = 1 and fB ≠ 0, the gray value of
all pixels is shifted up or down, making the image
brighter or darker. If fA < 0, dark areas are brightened
and bright areas are darkened, so the point operation
produces the complement of the image. In the
particular case fA = 1, fB = 0, the output image is
identical to the input; when fA = -1, fB = 255, the
grayscale of the output image is precisely the reverse
of the input [5].
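As a minimal sketch (illustrative only, not the authors' code), the transform of Eq. (2) can be written in a few lines of Python; the results are clamped to the 8-bit gray range [0, 255]:

```python
def gray_stretch(pixels, f_a, f_b):
    # DB = f_a * DA + f_b, clamped to the valid 8-bit gray range.
    return [max(0, min(255, round(f_a * p + f_b))) for p in pixels]

row = [0, 64, 128, 255]
print(gray_stretch(row, 1.5, -20))   # f_a > 1: contrast increased
print(gray_stretch(row, -1, 255))    # f_a = -1, f_b = 255: exact negative
```

With fA = -1 and fB = 255 the second call reproduces the grayscale reversal described above.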
The original red blood cell image is shown in Fig. 1.
Fig. 1. The original red blood cell image.
The Enhanced Image by the Gray Stretch is
shown in Fig. 2.
Fig. 2. Red blood cell image by gray stretch.
2.2.2. The Mean Filter of the Red Blood
Cell Image
Mean filtering is an image enhancement
technique in the spatial domain [1], which can reflect
the spatial texture characteristics of the image, such
as physical location, shape, and size. The mean value
of all pixels in the neighborhood is assigned to the
corresponding output pixel so as to achieve
smoothing.
A 3×3 template is adopted in this paper, and the
average filtering process is shown in Fig. 3. Fig. 3(a)
shows a small part of an image, with a total of
9 pixels, where Pi (i = 0, 1, ..., 8) is the grey value of
each pixel; Fig. 3(b) shows a 3×3 template, where
Ki (i = 0, 1, ..., 8) is called the template coefficient.
Odd sizes (such as 3×3, 5×5) are generally chosen
for the template, and the mean filter can be divided
into the following steps:
1) Set the coefficients Ki (i = 0, 1, ..., 8);
2) Let the template roam over the image so that
K0 overlaps pixel P0 in Fig. 3. The gray value r0 of
the corresponding pixel of the output image is
calculated by the formula shown in Fig. 3(c);
3) The grey values of all pixels of the enhanced
image are obtained by applying the formula of
Fig. 3(c) to each pixel.
This process applies to all spatial filtering
methods; that is, the function of a spatial filter is
realized by applying template convolution over the
neighborhood of each pixel.
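The 3×3 averaging described above can be sketched as follows (an illustrative implementation, not the authors' code; copying the border pixels unchanged is just one of several common edge policies):

```python
def mean_filter_3x3(img):
    # Replace each interior pixel with the integer mean of its 3x3
    # neighborhood; border pixels are copied unchanged.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

noisy = [
    [10, 10, 10],
    [10, 255, 10],   # one bright noise pixel
    [10, 10, 10],
]
print(mean_filter_3x3(noisy))   # the spike is smoothed toward its neighbors
```

This corresponds to convolving with a template whose nine coefficients Ki all equal 1/9.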
Fig. 3. Average filtering process.
To remove noise, the image was smoothed with a
3×3 template. Results are shown in Fig. 4.
Fig. 4 (a). Red blood cell image median filter smoothing:
the image before smoothing.
Fig. 4 (b). Red blood cell image median filter smoothing:
the image after smoothing.
2.2.3. Threshold Segmentation of Red Blood
Cell Image
Threshold segmentation is a region-based
segmentation technique [2], which splits the image
gray levels into two or more intervals specified by
the user. Using the difference in gray level between
the target objects and the background, we choose an
appropriate threshold value. By judging whether each
pixel meets the threshold condition, we determine
which region the pixel belongs to: the target area or
the background region. One commonly used
thresholding method is binarization of the image:
select a threshold, then convert the image to a
black-and-white binary image, as a preprocessing
step for image segmentation, edge tracing, etc. Using
an interactive thresholding method in a Windows
application [6], we obtained the red blood cell
threshold segmentation image shown in Fig. 5.
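Binarization as described can be sketched as follows (illustrative only; the threshold value and the at-or-above comparison are assumptions of this sketch, since the paper chooses the threshold interactively):

```python
def binarize(img, t):
    # Map pixels at or above threshold t to white (255), others to black (0).
    return [[255 if p >= t else 0 for p in row] for row in img]

gray = [
    [30,  80, 200],
    [40, 160, 220],
]
print(binarize(gray, 128))   # cells >= 128 become 255
```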
2.2.4. Image Edge Detection and Extraction
An edge usually refers to the collection of pixels
around which the gray level has a step or roof
change, and it is an important characteristic on which
image segmentation depends. The Laplace operator
and the Sobel operator are respectively used to
sharpen the red blood cell images [1, 7], and the
resulting images are shown in Fig. 6.
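The Sobel operator mentioned above can be sketched as follows (an illustrative implementation using the standard 3×3 Sobel kernels; the magnitude approximation |Gx| + |Gy| and the zeroed borders are choices of this sketch, not necessarily those of the authors):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel_magnitude(img):
    # Approximate gradient magnitude |Gx| + |Gy| at interior pixels,
    # clipped to 255; border pixels are set to 0.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = img[y + dy][x + dx]
                    gx += SOBEL_X[dy + 1][dx + 1] * p
                    gy += SOBEL_Y[dy + 1][dx + 1] * p
            out[y][x] = min(255, abs(gx) + abs(gy))
    return out

step = [[0, 0, 255, 255] for _ in range(4)]   # vertical step edge
print(sobel_magnitude(step))   # strong response at pixels straddling the edge
```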
Fig. 5 (a). Red blood cell image
before threshold segmentation.
Fig. 5 (b). Red blood cell image
after threshold segmentation.
(a) Image before sharpening.
(b) Image after sharpening.
Fig. 6. Laplace sharpening processing of the red blood
cell image.
2.2.5. Red Blood Cell Image Processing
2.2.5.1. The Geometrical Characteristics
of the Red Blood Cell Image
Normal mature red blood cells are reddish or
orange and disc-shaped, with a concentric pale
center; the diameter of the light colored area is about
1/3 of the diameter of the cell. The red blood cell
image samples chosen for the test are shown in
Fig. 7; the labeled cells, randomly sampled from the
red blood cells, are to be detected.
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 1-6
(a) Image before sharpening.
(b) Image after sharpening.
Fig. 7. Sobel sharpening processing
of the red blood cell image.
First, in the image-processing software interface shown in Fig. 9, the image is processed by grey-level stretching, median filtering and threshold segmentation, in preparation for the subsequent extraction of the single red blood cells. After obtaining higher-contrast, denoised red blood cell images, the drawing software of the Windows XP system is used to extract the selected red blood cell images [6]. The selected red blood cell images are then numbered and arranged, giving the new arrangement shown in Fig. 10.
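The grey-level stretch and median filtering steps can be sketched as follows (a NumPy sketch; the linear stretch to the full [0, 255] range and the 3×3 median window are assumptions about the software's settings):

```python
import numpy as np

def gray_stretch(img):
    """Linearly stretch the grey levels of the image to the full
    [0, 255] range to increase contrast."""
    g = np.asarray(img, dtype=float)
    lo, hi = g.min(), g.max()
    if hi <= lo:                      # flat image: nothing to stretch
        return np.zeros_like(g)
    return (g - lo) * 255.0 / (hi - lo)

def median3(gray):
    """3x3 median filter: replaces each pixel by the median of its
    neighbourhood, removing impulse noise while keeping edges."""
    g = np.asarray(gray, dtype=float)
    p = np.pad(g, 1, mode='edge')
    # stack the 9 shifted views of every pixel's neighbourhood
    stack = np.stack([p[i:i + g.shape[0], j:j + g.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def preprocess(img):
    """Contrast stretch followed by median filtering, as in the text."""
    return median3(gray_stretch(img))
```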
Fig. 8. The original red blood cells.
Fig. 9. Software interface of image processing.
Fig. 10. Red blood cells of image selected to be detected.
3. Results and Analysis
Detecting the edges of the red blood cell images shown in Fig. 11, we obtain the first-level detection results (shown in Fig. 12).
In test one, according to Fig. 12, cells No. 15 and No. 17 are rectangular, not disc-shaped as normal red blood cells are in medical science; we therefore conclude that these two red blood cells are abnormal.
In test two, the single red blood cells are extracted through binarization, as shown in Fig. 13, in preparation for the following calculation of the geometrical characteristics of the red blood cells.
Fig. 11. Edge detection of the red blood cell image to be detected.
Fig. 12. Edge detection of the red blood cell image
chosen.
Fig. 13. Binarization processing of the images before red blood cell detection.
According to the binarized images in Fig. 13, observing the light central areas of the cells, cells No. 1, 3, 4, 5, 7, 8, 9, 11, 12 and 13 either show no light area at all or have a light-colored area smaller than 1/3 of the cell diameter, so we conclude that these red blood cells are abnormal.
In test three, the geometrical characteristics of the red blood cells after the binarization processing are calculated, as shown in Fig. 14. The aggregated data of the red blood cell geometric characteristics are shown in Fig. 15.
Fig. 14. Calculation of the red blood cell geometrical
characteristics.
Fig. 15. Data aggregation of red blood cells
to geometric features.
With the software used in this experiment, the average area of the normal red blood cells is about 830 pixels, while the areas of red blood cells No. 20 and No. 23 deviate from this average by more than 100, so they can be regarded as abnormal red blood cells. Finally, the normal red blood cells detected are shown in Fig. 16.
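The area test described above can be sketched as follows (assuming the area of a segmented cell is simply its count of foreground pixels; the 830-pixel mean and the ±100 tolerance come from the text, the function names are ours):

```python
import numpy as np

MEAN_AREA = 830     # average normal-cell area reported above (pixels)
TOLERANCE = 100     # deviation beyond which a cell is flagged abnormal

def cell_area(binary_mask):
    """Area of a single segmented cell = number of foreground pixels."""
    return int(np.count_nonzero(binary_mask))

def is_abnormal(binary_mask):
    """Flag a cell whose area deviates from the mean by more than
    the tolerance, as done for cells No. 20 and No. 23."""
    return abs(cell_area(binary_mask) - MEAN_AREA) > TOLERANCE
```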
Fig. 16. The normal red blood cell image.
4. Conclusions
According to the hospital that provided the medical red blood cell image samples of Fig. 8, the abnormal rate was 70 %, while the abnormal rate of red blood cells obtained in this experiment was 62.5 %. We can therefore conclude, to a first approximation, that the average detection rate of abnormal red blood cells in this study is more than 80 %.
In short, through the image processing and detection procedure, using a variety of image processing technologies, we completed the extraction of single red blood cells, realized the detection of abnormal red blood cells, and achieved good results. However, some problems in red blood cell image detection remain to be solved:
1) Some discrimination error rates are still high, because only geometric features are used for the analysis; color, texture and the proportions of the internal structure are not considered;
2) Errors in the detection process inevitably have some effect on the experimental results;
3) To facilitate testing and ensure a higher detection rate, red blood cell images without overlapping cells were selected in this study; new treatment methods for overlapping cells will be explored in the future.
Acknowledgements
We would like to thank the Jiangxi Department of Education Science and Technology Plan Project (GJJ11490).
References
[1]. Fu Desheng, Graphic image processing, Southeast
University Press, Nanjing, 2001.
[2]. Nie Bin, Medical image segmentation technology and
its progress, Mount Taishan Medical School Journal,
Vol. 23, No. 4, 2002, pp. 422-426.
[3]. Tian Ya, Rao Nini, Pu Li Xin, The latest dynamic of
domestic medical image processing technology,
Journal of University of Electronic Science and
Technology, Vol. 3, No. 2, 2001, pp. 3-9.
[4]. Jia Minyi, Diagnostics, People's Medical Publishing
House, Beijing, 1981.
[5]. M. Christgan, K. A. Hiller, G. Schmalz et al.,
Accuracy of quantitative digital subtraction
radiography for determining changes in calcium mass
in mandibular bone, Journal of Periodontal
Researches, Vol. 33, Issue 3, 1998, pp. 138-149.
[6]. Cheng Wenbin, Jin Xiangfeng, Visual C++ utility,
Beijing University of Aeronautics and Astronautics
Press, Beijing, 1995.
[7]. Xiao Yi, Long Mei, Ni I, Li Hongyang, Computer
application in medical image processing, Medical
Education and Technology of China, Vol. 15, No. 4,
2001, pp. 203-204.
___________________
2013 Copyright ©, International Frequency Sensor Association (IFSA). All rights reserved.
(http://www.sensorsportal.com)
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 7-12
Sensors & Transducers
© 2013 by IFSA
http://www.sensorsportal.com
Research and Implementation of Pattern Recognition
Based on Adaboost Algorithm
1 Luqun CHANG, 2 Zhengfu BIAN
1 Ji’nan Research Institute of Geotechnical Investigation & Surveying,
59 Lishan Road, 250013, Ji’nan, Shandong Province, China
2 Institute of Land Resources, China University of Mining and Technology,
Xuzhou, Jiangsu Province, China
Received: 28 August 2013 /Accepted: 25 October 2013 /Published: 30 November 2013
Abstract: Pattern recognition and computer vision have long been subjects of concern, with high academic and commercial value. Adaboost is an iterative algorithm whose core idea is to obtain a set of weak classifiers by training on a training set; a much stronger classifier is then obtained by combining these weak classifiers. In this paper, we first introduce the basic theory of the Adaboost algorithm and then take face recognition as an application example; the training process and the detection process are implemented separately and independently. Experimental results show that the detector based on the Adaboost algorithm can accurately detect the location of faces, regardless of their position, scale, orientation, lighting conditions, expression, etc., with a small detection error. In particular, the detector can effectively detect multiple faces, with high detection accuracy. Copyright © 2013 IFSA.
Keywords: Pattern recognition, Computer vision, Face detection, Adaboost algorithm.
1. Introduction
Pattern detection algorithms have been developed over the past decades. Each detection algorithm was developed in a particular application context, and these detection methods fall into two main types: image-based methods and feature-based methods. The first type uses classifiers trained statically on a given sample set; each classifier is then scanned over the test samples. The second type locates objects by detecting particular features. The pattern detection algorithm used here is both image based and feature based. It is image based in the sense that a learning algorithm trains the classifier with chosen positive and negative training samples, and it is feature based because the many features chosen by the learning algorithm relate directly to particular image features. Boosting techniques improve the performance of base classifiers by re-weighting the training examples; learning via boosting is the main contribution to pattern detection.
Face information processing has always been an important issue in pattern recognition and machine vision research, and it is one of the important components of current biometric identification technology. Moreover, the face, as one of the most important visual objects in images and video, occupies an important position in computer vision, pattern recognition and multimedia technology research. Face detection supports the content-based processing and retrieval of face information. In recent years it has been a very active direction in intelligent human-machine interfaces, with a very wide range of applications in content-based retrieval, digital video processing, security and other fields [1-2].
Article number P_1530
http://www.sensorsportal.com/
In recent years, face detection has developed considerably. Amit et al. presented a method for shape detection, later applied to detect frontal-view faces in still intensity images [3]. Viola et al. proposed a new detection method based on integral image features and the Adaboost algorithm [4]; the speed and performance of this cascade classifier are comparable to the ANN method proposed by Rowley [5]. Later, Li's research group extended this method to multi-view face detection [6]. Kauth et al. proposed a blob representation to extract a compact, structurally meaningful description of multispectral satellite imagery. Craw et al. proposed a localization method based on a shape template of a frontal-view face, in which a Sobel filter is first used to extract edges; these edge features are grouped together to search for the face template under several constraints. All of the above methods achieve good face detection performance.
Recently, many researchers have started to use the Adaboost algorithm for pattern detection. Adaboost is an iterative algorithm whose core idea is to obtain a set of weak classifiers by training on a training set; a much stronger classifier is then obtained by combining these weak classifiers. In this paper, we first introduce the basic theory of the Adaboost algorithm; the training process and the detection process are then implemented separately and independently. Experimental results show that the detector based on the Adaboost algorithm can accurately detect the location of faces with a small detection error. In particular, the detector can effectively detect multiple faces, with high detection accuracy.
2. Adaboost Algorithm
2.1. Adaboost Review
The Adaboost algorithm is based on the grey-scale distribution of target features and uses Haar features [4-7]. Haar features are computed from the integral image and are mainly used on grey-scale images; their advantages are simple calculation and fast extraction. The Adaboost algorithm first extracts the Haar features of an image, then, through the training process, converts selected Haar features into weak classifiers; finally, these weak classifiers are combined in an optimized way for face detection. Fig. 1 shows the flow chart of detection based on the Adaboost algorithm.
The integral image assigns to every point the sum of all pixel values above and to the left of that point in the original image, inclusive:

  ii(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′),  (1)

where i(x, y) is the original image and ii(x, y) is the integral image. Fig. 2 shows the process of computing the integral image, and Fig. 3 shows an example of an integral image.
Fig. 1. The flow chart of face detection based
on Adaboost algorithm.
(a) image data (b) integral image data
Fig. 2. The data of the integral image.
(a) original image (b) integral image
Fig. 3. A specific example of integral image.
From the properties of the integral image, the sum of the pixels in an arbitrary rectangular region can be calculated using formula (1) quickly and in fixed time. This property makes Haar feature extraction both fast and of fixed computation time; because Haar feature extraction is fast enough, the Adaboost detection algorithm has become one of the fastest detection algorithms.
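Eq. (1) and the constant-time rectangle sum it enables can be sketched as follows (a NumPy sketch using two cumulative sums; function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of all pixels at or above-left of (x, y),
    as in Eq. (1); computed with two cumulative sums."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64),
                               axis=0), axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, from at most four
    corner lookups in the integral image -- fixed cost per rectangle."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return int(total)
```

However large the rectangle, `rect_sum` costs the same, which is exactly the fixed-computation-time property the text relies on.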
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 7-12
9
As is well known, common Haar features are designed according to the contrast between the grey levels of neighbouring regions. Fig. 4 shows four types of classical Haar features. Because Haar features reflect the grey-level distribution of the image, the face detection problem is converted into the problem of finding Haar features that describe the grey-level distribution of faces well. The Adaboost algorithm selects the optimal features from a large number of Haar features and converts each of them into a corresponding weak classifier, so as to achieve the target classification; the Adaboost training process is precisely this selection of weak classifiers.
(a) feature A (b) feature B
(c) feature C (d) feature D
Fig. 4. Four types of classical Haar features.
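For illustration, feature A of Fig. 4 (a two-rectangle feature) can be evaluated directly on a window's grey values as below (a toy sketch; in a real detector the two sums are taken from the integral image so the cost per feature is fixed):

```python
import numpy as np

def haar_feature_A(window):
    """Two-rectangle Haar feature: sum of the left half of the window
    minus sum of the right half, measuring horizontal grey contrast."""
    w = np.asarray(window, dtype=int)
    half = w.shape[1] // 2
    return int(w[:, :half].sum() - w[:, half:].sum())
```

A strongly positive or negative value signals a vertical edge in the window; near-zero values signal a flat region.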
2.2. Training Process
Each Haar feature corresponds to one weak classifier, but not every Haar feature describes the grey-level distribution of a given pattern well. A key problem in the Adaboost training process is how to select the optimal Haar features from a large number of candidates and turn them into a classifier for detection. The training samples are close-up face images, but face shapes vary greatly, so the sample selection process must take the diversity of the samples into account. Training samples need to be preprocessed before training; generally speaking, no special algorithm is required for this, but the face poses in the samples should be kept as consistent as possible. Fig. 5 shows a set of training samples.
Firstly, Haar features are extracted from the face images of the training set, and weak classifiers are then generated from these features. Each Haar feature corresponds to one weak classifier, and each weak classifier is defined by the parameters of its corresponding Haar feature; these parameters are obtained from the statistics of the feature values over the training samples. The weak classifier is defined as:

  h_j(x) = 1 if p_j f_j(x) ≤ p_j θ_j, and h_j(x) = 0 otherwise,  (2)
where f_j(x) is the value of the j-th Haar feature, the parity p_j indicates the direction of the inequality, and θ_j is the threshold. According to the statistics used, weak classifiers can be divided into single-domain-value and dual-domain-value weak classifiers.
Fig. 5. A set of ORL face training samples.
For the current Haar feature, let θ_j^{+1} and θ_j^{−1} be the mean feature values over the positive and the negative training samples, respectively. The threshold is then set to θ_j = (θ_j^{+1} + θ_j^{−1}) / 2; if θ_j^{+1} > θ_j^{−1}, then p_j = +1, otherwise p_j = −1. The weak classifier outputs 1 or 0: an output of 1 means the judgment is true, i.e. a face image; an output of 0 means false, i.e. a non-face image. A single weak classifier has limited capability, so weak classifiers are grouped together into a strong classifier.
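The weak classifier of Eq. (2) can be sketched as follows (parameter names mirror the equation; the feature value f_j(x) is assumed to have been computed beforehand):

```python
def weak_classify(feature_value, parity, threshold):
    """h_j(x) from Eq. (2): output 1 (face) when
    parity * feature <= parity * threshold, else 0 (non-face)."""
    return 1 if parity * feature_value <= parity * threshold else 0
```

The parity simply flips the inequality, so one definition covers features for which faces score low and features for which faces score high.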
Next, we describe the training process. The Adaboost training process is the selection of the optimal weak classifiers and the assignment of their weights [8-9]. Fig. 6 shows the training process of the Adaboost algorithm. The specific training steps are as follows:
1) Label the n training samples: the m face samples are labeled y_i = 1 and the n − m non-face samples are labeled y_i = 0;
2) Initialize the weights: each face sample starts with weight w_{1,i} = 1/(2m), and each non-face sample with w_{1,i} = 1/(2(n − m));
Fig. 6. The training process of Adaboost algorithm.
3) Select T weak classifiers (T iterations):
a) In the t-th iteration, compute the weighted error of the j-th weak classifier,

  ε_j = Σ_i w_{t,i} |h_j(x_i) − y_i|,

and choose the weak classifier h_t(x) with the minimum error ε_t. Compute β_t = ε_t / (1 − ε_t) and set α_t = log(1/β_t); α_t is the weight of the weak classifier;
b) Use h_t(x) and β_t to update all the weights: w_{t+1,i} = w_{t,i} β_t^{1−e_i}, where e_i = 0 if the i-th sample is classified correctly and e_i = 1 otherwise;
c) Normalize the weights: w_{t+1,i} ← w_{t+1,i} / Σ_j w_{t+1,j};
d) Set t = t + 1.
4) A strong classifier is obtained by a linear combination of the selected weak classifiers:

  h(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and h(x) = 0 otherwise.  (3)
A single weak classifier has poor classification performance; its initial training error is 0.15 and rises gradually thereafter. We therefore combine the weak classifiers into a strong classifier to obtain better performance.
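Steps 1)-4) above can be sketched compactly as follows (a plain-Python sketch assuming every selected weak classifier has a weighted error strictly between 0 and 1, so that β_t and α_t are well defined; names and the toy candidate set are illustrative):

```python
import math

def adaboost_train(features, labels, candidates, T):
    """Select T weak classifiers with weights, following steps 1)-4).
    `features[i]` is the feature value of sample i, `labels[i]` is 1
    (face) or 0 (non-face), and each candidate maps a sample to 1/0."""
    n = len(labels)
    m = sum(labels)                          # number of face samples
    # 2) initial weights: 1/(2m) for faces, 1/(2(n-m)) for non-faces
    w = [1.0 / (2 * m) if y == 1 else 1.0 / (2 * (n - m))
         for y in labels]
    strong = []
    for _ in range(T):
        # a) weighted error of every candidate; keep the best one
        errors = [sum(w[i] * abs(h(features[i]) - labels[i])
                      for i in range(n)) for h in candidates]
        t = min(range(len(errors)), key=errors.__getitem__)
        eps = errors[t]
        beta = eps / (1.0 - eps)
        alpha = math.log(1.0 / beta)         # weight of this classifier
        h_t = candidates[t]
        # b) down-weight correctly classified samples by beta
        w = [w[i] * (beta if h_t(features[i]) == labels[i] else 1.0)
             for i in range(n)]
        s = sum(w)                           # c) normalize
        w = [wi / s for wi in w]
        strong.append((alpha, h_t))
    return strong

def strong_classify(strong, x):
    """Eq. (3): face iff the weighted vote reaches half the total."""
    total = sum(alpha for alpha, _ in strong)
    vote = sum(alpha * h(x) for alpha, h in strong)
    return 1 if vote >= 0.5 * total else 0
```

Down-weighting the samples a classifier already gets right forces each later iteration to concentrate on the samples that are still misclassified.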
2.3. Cascade Classifier
A strong classifier is obtained by combining a number of weak classifiers according to equation (3), and each strong classifier already has good face detection performance. If several strong classifiers are cascaded together, an object that passes the strong classifiers at all levels is very likely to be a human face. Based on this principle, the Adaboost algorithm introduces a waterfall-style cascade of associated classifiers. The flow of the detection algorithm based on cascade classifiers is shown in Fig. 7.
Fig. 7. The flow of detection algorithm based
on cascade classifiers.
The cascade classifier connects several strong classifiers in series, each level more complex and stricter than the previous one. Non-face images are ruled out at the front of the cascade, so that only face images pass through the strong classifiers at all levels. Moreover, because most non-face images are eliminated in the first few levels of the cascade, the detection speed of the algorithm is greatly increased.
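The cascade principle just described can be sketched in a few lines (each stage is assumed to be a strong classifier returning 1 for "face" and 0 for "non-face"; names are illustrative):

```python
def cascade_classify(stages, window):
    """Run the window through strong classifiers of increasing
    strictness; any stage voting non-face rejects immediately, so
    most non-face windows never reach the expensive later stages."""
    for stage in stages:
        if stage(window) == 0:
            return 0            # rejected: later stages are skipped
    return 1                    # survived every stage: report a face
```

Early rejection is what makes the cascade fast: the cheap first stages discard the overwhelming majority of scanned windows.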
3. Face Detection Based
on Adaboost Algorithm
To make use of the cascade classifier's performance for image detection, a detection mechanism and its processing interface must be designed. In order to detect human faces at various scales, a multi-scale detection mechanism is introduced. Several scale-variation methods are in common use, but to preserve detection speed two methods are available: one applies the scale transformation to the classifier, which requires changing the threshold values of the weak classifiers; the other resamples the image at different scales, which is simple to implement but somewhat more time-consuming than the former method.
The flow of the detection process is shown in Fig. 8. Face detection based on the Adaboost algorithm operates on grey-level data, so the first step of detection is to convert the image to be detected into a grey-scale image.
Fig. 8. A flow of the detection process.
As seen from Fig. 8, the second step computes the integral image of the grey-scale image. The third step runs the detection on the integral image at different scales and merges the detection results across scales. The fourth step outputs the merged detection results.
On the same scale, when two detected face sub-windows overlap, we must consider whether they should be merged. According to the experimental results, overlapping sub-windows are combined when the overlapping part exceeds 0.5 of the current window size; the combination is performed by averaging the window coordinates.
In addition, windows detected at different scales that overlap also need to be combined. Duplicate detections generally occur at nearby positions on adjacent scales; this not only leads to repeated detections, but may also introduce unnecessary errors into the detection results.
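The 0.5-overlap merge-by-averaging rule described above can be sketched as follows (windows are assumed to be (x, y, w, h) tuples; the greedy pairing strategy is our simplification of the merging procedure):

```python
def overlap_fraction(a, b):
    """Fraction of window a's area covered by window b."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(aw * ah)

def merge_pair(a, b):
    """Combine two detections of the same face by averaging."""
    return tuple((u + v) / 2.0 for u, v in zip(a, b))

def merge_detections(windows, threshold=0.5):
    """Greedy merge: average any window into the first kept window
    it overlaps by more than the threshold; otherwise keep it."""
    merged = []
    for win in windows:
        for i, kept in enumerate(merged):
            if overlap_fraction(win, kept) > threshold:
                merged[i] = merge_pair(kept, win)
                break
        else:
            merged.append(tuple(float(v) for v in win))
    return merged
```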
4. Experimental Verification
In this section, a number of experimental results are presented to verify the effectiveness of the Adaboost detection algorithm. First, we run the detector on the IMM face database; Fig. 9 shows some detection results. From Fig. 9 we can see that the detector accurately locates the faces with a small detection error. The detector was then run on some pictures from the network; Fig. 10 shows some detection results on this face set, and again the detector accurately locates the faces. We then verify the effectiveness of the Adaboost detection algorithm on images containing multiple faces; Fig. 11 shows some of these detection results. From the experimental results, we can easily see that the detector effectively detects multiple faces, with high detection accuracy.
Fig. 9. Detection results on IMM face database.
Fig. 10. Detection results on faces set from network.
Fig. 11. A few of faces in an image.
As noted above, windows detected at different scales that overlap also need to be combined. Some experimental results illustrate this combining process for different windows. Fig. 12 shows an example of how detection results are merged: Fig. 12 (a) is the result of several windows overlapping at the same scale; Fig. 12 (b) shows several windows overlapping each other at different scales; and Fig. 12 (c) is the final combined result.
(a) Overlap at the same scale. (b) Overlap at different scales. (c) Combined result.

Fig. 12. Results of combining windows.
5. Conclusions
Face detection has long been a subject of concern in pattern recognition and machine vision, with high academic and commercial value. With the rapid development of face-related technologies, face detection, as a key step, has attracted more and more attention from researchers. Adaboost is an iterative algorithm whose core idea is to obtain a set of weak classifiers by training on a training set; a much stronger classifier is then obtained by combining these weak classifiers. In this paper, we first introduced the basic theory of the Adaboost algorithm; the training process and the detection process were then implemented separately and independently. Experimental results show that the detector based on the Adaboost algorithm accurately locates faces with a small detection error. In particular, the detector effectively detects multiple faces, with high detection accuracy.
References
[1]. Ming-Hsuan Yang, D. J. Kriegman, N. Ahuja,
Detecting faces in images: a survey, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, Vol. 24, Issue 1, 2002, pp. 34-58.
[2]. Hyobin Lee, Seongwan Kim, Sooyeon Kim,
Sangyoun Lee, Face detection using multi-modal
features, in Proceedings of the International
Conference on Control, Automation and Systems,
2008, pp. 2152-2155.
[3]. Y. Amit, D. Geman, B. Jedynak, Efficient focusing
and face detection, Face Recognition: From Theory
to Applications, Vol. 163, 1998, pp. 124-156.
[4]. P. Viola, M. Jones, Rapid object detection using a
boosted cascade of simple features, in Proceedings of
the IEEE Conference on Computer Vision and
Pattern Recognition, Kauai, Hawaii, USA, 2001.
[5]. H. A. Rowley, S. Baluja, T. Kanade, Neural network-
based human face detection, IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. 20,
Issue 1, 1998, pp. 23-38.
[6]. Z. Zhang, S. Z. Li, H. Zhang, Real-time multi-view
face detection, in Proceedings of the Conference on
Automatic Face and Gesture Recognition,
Washington DC, USA, 2002, pp. 149-154.
[7]. A. Treptow, A. Zell, Combining Adaboost learning
and evolutionary search to select features for real-
time object detection, in Proceedings of the
International Conference on Evolutionary
Computation (CEC’2004), Vol. 2, 2004,
pp. 2107-2113.
[8]. Zhen Qiu Zhang, Mingling Li, S. Z. Li, Hongliang
Zhang, Multi-view face detection with FloatBoost, in
Proceedings of the 6th IEEE Workshop on
Applications of Computer Vision (WACV’ 02), 2002,
pp. 184-188.
[9]. Yong Ma, Xiaoqing Ding, Real-time rotation
invariant face detection based on cost-sensitive
Adaboost, in Proceedings of the International
Conference on Image Processing (ICIP’2003), Vol. 2,
2003, pp. 921-924.
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 13-18
Sensors & Transducers
© 2013 by IFSA
http://www.sensorsportal.com
Image Based Computer-Aided Manufacturing Technology
1 Zhanqi HU, 2 Xiaoqin ZHANG, 2 Jinze LI, 1 Wei LI
1 College of Mechanical Engineering, Yanshan University,
Qinhuangdao, Hebei, 066004, China
2 Mechanical and Electrical Engineering College,
Hebei Normal University of Science and technology,
Qinhuangdao, Hebei, 066004, China
1 Tel.: 0335-8057031, fax: 0335-8074783
E-mail: ronghu118@163.com
Received: 19 August 2013 /Accepted: 25 October 2013 /Published: 30 November 2013
Abstract: Image-based manufacturing is a novel manufacturing method that combines machining technology with machine vision. Using this technique, machine tools can perform the cutting process according to what they "see", much as if the machine tool were equipped with eyes. In this paper, some of the authors' research on the subject is presented, including the key techniques. The construction of the image-based manufacturing system is introduced briefly. The geometrical model is then built from the image information, using shape from shading with an adaptive pre-processing method. After the model is built, the cutting path is planned, and two cutting strategies, line cutting and contour cutting, are carried out. NC programs are generated automatically, and the machining process is then performed. Finally, a prototype system named ImageCAM is introduced, in which the algorithms developed in our research are verified. Copyright © 2013 IFSA.
Keywords: Image processing, Bitmap, Line cutting method, Contour cutting method, Layer cutting method,
Shape from shading (SFS).
1. Introduction
Although geometrical modeling techniques are widely used in product development and manufacturing, some product surfaces still cannot be described with a CAD model. In this situation, a method is required to transform the surface information into a CAD model that can be processed with CAD/CAM software. Reverse engineering (RE) can acquire surface data and transform the data into a CAD model, but RE requires special measurement machines, which are generally expensive, and the measurement process is time-consuming. When only a picture of the product can be obtained, general RE techniques cannot perform the copying process. This is where a machine-vision-based RE system is needed, one that can extract three-dimensional information from one or more pictures of a product and transform that information into a geometrical model that a CAD system can accept; this is the subject researched in this paper.
The objective of machine vision is to make a computer perceive the three-dimensional environment through two-dimensional pictures, detecting not only the geometric information of an object but also its shape, position and motion. Machine vision is founded on image processing, pattern identification and artificial intelligence [1].
Article number P_1531
http://www.sensorsportal.com/

Machine-vision-based manufacturing combines machine vision with manufacturing technology. Using machine vision, the three-dimensional information of an object is extracted, a geometric model of the object is built, a CNC program is generated and machining is carried out. How to acquire the three-dimensional information of an object and build its three-dimensional model are the key points of machine-vision-based manufacturing. The prospect of this technique is the integration of measurement, modeling and manufacturing, which is much like equipping machine tools with "eyes" and enabling them to copy the objects they see.
The development of machine vision technology gives a solid basis to our research. The SFS method [2, 3] is an important method in machine vision research. References [4-6] develop and improve the SFS method, making machine vision more precise. References [7, 8] discuss the boundaries between surfaces, making the curved surfaces smoother. But much work remains to be done before a work-piece can be machined from an image of an object alone.
Machine-vision-based manufacturing is one of the advanced manufacturing techniques; it integrates image processing, CNC technology and machining technology, and can be used in areas such as reverse engineering, fast prototyping of products, and the manufacturing of works of art.
2. Machine Vision Based Manufacturing
System
The construction of the machine-vision-based manufacturing system is shown in Fig. 1; it consists of a camera, an image processing card, a computer, and a machine tool. The camera takes a picture of the object to be copied. The image processing card performs some pre-processing of the picture. The computer builds the geometrical model of the surface, computes the tool path and generates the CNC program. The program is transferred to the CNC machine tool, with which the work-piece is machined. The process is also called three-dimensional copying. The number of cameras depends on the machine vision method used in the system; for a multiple-eye vision system, more than one camera may be used. A single-eye machine vision method is used in the authors' research, so there is only one camera in the system.
Fig. 1. Construction of machine vision based
manufacturing system.
Single-eye machine vision is one of the simplest machine vision techniques, and also the main method with which three-dimensional information can be extracted from a picture. Recovering three-dimensional shape from a single picture is called shape from shading (SFS), and the SFS method is used to obtain three-dimensional information from pictures in this research. SFS was first proposed by Horn [9]; it is a main algorithm in machine vision for recovering three dimensions from camera images, and is based on the fact that changes in the direction of a surface lead to changes of grey level in the picture of that surface. SFS can extract three-dimensional information from a few pictures of an object, and especially from a single picture.
The SFS method has developed very fast in recent years and is being applied in many fields, including industry. The main SFS algorithms include recovering three-dimensional information using occluding boundaries [10], from the normal directions of surfaces [11], and from orthogonal polynomials [12]. Based on the above research, a new algorithm is proposed in this paper, by which three-dimensional information can be recovered more precisely than with previous methods.
3. Modeling of Work-Piece
3.1. Extracting Three-Dimension
Information from a Picture
Under ideal conditions, the grey level of the image satisfies the reflectance map equation:

I(x,y) = R(p,q) = \frac{\rho\,(1 + p\,p_s + q\,q_s)}{\sqrt{1+p^2+q^2}\,\sqrt{1+p_s^2+q_s^2}}, (1)

where (p,q) = (\partial z/\partial x, \partial z/\partial y) is the normal direction of the surface, z = z(x,y) is the equation of the surface, (p_s, q_s) is the direction of the light source, and I(x,y) is the grey level of the image. Shape from shading is then the calculation of the surface normal direction (p,q) from the image grey level I(x,y). To deal with the ill-posedness of the reflectance map equation, a regularization method based on depth continuity is used. A global objective function is constructed:

E = \sum_{x,y} \left[ I(x,y) - R(p,q) \right]^2 + \lambda \sum_{x,y} F(x,y), (2)
The first term of the equation comes from Eq. (1) and measures the difference between the actual grey value I(x,y) and the one calculated from the normal parameters p and q. The regularization condition is expressed by the continuity constraint F(p,q) as:
F(x,y) = \left[p(x+1,y) - p(x,y)\right]^2 + \left[q(x+1,y) - q(x,y)\right]^2 + \left[p(x,y+1) - p(x,y)\right]^2 + \left[q(x,y+1) - q(x,y)\right]^2 (3)
Because (p, q, 1)^T is the normal direction of the object surface, the value of Eq. (3) is the rate of change of the normal direction. Therefore, the smaller the value of the second term in Eq. (2), the smoother the object surface is. The (x, y) in Eq. (2) are the discrete coordinates of the image, and the summation region is a part of the image in which all points correspond to a continuous surface on the same object.
Our target is to find the p(x,y) and q(x,y) that minimize the value of Eq. (2). Differentiating E in Eq. (2) with respect to each p(x,y) and q(x,y), and setting the derivatives to zero, recursive formulas for p and q are derived:
p_{t+1}(x,y) = \bar{p}_t(x,y) + \eta \left[ I(x,y) - R\!\left(\bar{p}_t(x,y), \bar{q}_t(x,y)\right) \right] \left.\frac{\partial R}{\partial p}\right|_{\bar{p}_t, \bar{q}_t} (4)

q_{t+1}(x,y) = \bar{q}_t(x,y) + \eta \left[ I(x,y) - R\!\left(\bar{p}_t(x,y), \bar{q}_t(x,y)\right) \right] \left.\frac{\partial R}{\partial q}\right|_{\bar{p}_t, \bar{q}_t} (5)
where p_{t+1}, q_{t+1} are the values of p and q at the (t+1)-th iteration, and \bar{p}_t(x,y), \bar{q}_t(x,y) are the average values of p and q in the neighbourhood of (x,y) at the t-th iteration:

\bar{p}_t(x,y) = \frac{1}{4}\left[ p_t(x+1,y) + p_t(x-1,y) + p_t(x,y+1) + p_t(x,y-1) \right] (6)

\bar{q}_t(x,y) = \frac{1}{4}\left[ q_t(x+1,y) + q_t(x-1,y) + q_t(x,y+1) + q_t(x,y-1) \right] (7)
By using the recursive formulas, the three-dimensional coordinates of an object can be calculated. The algorithm assumes a continuous surface, so the recovered result is accurate only for a single continuous surface, as in Fig. 2. For a non-continuous part of an object, such as the connecting part between two surface patches, the recovered result is very poor [13], as in Fig. 3. The reason is that the total surface consists of several patches; although each patch is continuous, the boundary between the patches is not. This does not meet the condition of the above algorithm.
At such boundaries, the normal direction of the surface patches changes greatly. Refraction of light makes the grey values on neighbouring patches very close, and shape distortion of the recovered surface takes place there. To solve this problem, an SFS algorithm with adaptive pre-processing is developed in the author's research.
(a) bitmap of hemisphere
(b) geometrical model
Fig. 2. Modeling of half sphere surface.
(a) bitmap of chili pepper
(b) geometrical model
Fig. 3. Modeling of chili pepper.
3.2. SFS Algorithm with Adaptive
Pre-processing
If the attenuation of light intensity is strengthened, the change of grey level will increase, and the recovered surface will be more like the sample surface. The main idea of the algorithm is that the image is first divided into patches that are continuous inside and non-continuous between each other. Then the grey level of each patch is reduced according to certain rules. Finally, the sample surface is recovered from the pre-processed patches. It has been verified that a surface recovered from pre-processed patches is more accurate than one recovered from the original image.
To simplify the problem, pre-processing of the image is reduced to a one-dimensional problem, and the calculation is performed line by line. The pre-processing procedure is as follows:
1) Parameter initialization, including the grey-level threshold (GREY), the window threshold (FLAG), and a counter (N).
2) Read in a line of the image and compare the grey level of each point on the line with GREY. While it is greater than GREY, increment the counter N.
3) When the grey level of a point is smaller than GREY, compare the counter N with the window threshold FLAG. If N is smaller than FLAG, the previous N pixels are processed; otherwise return to step 2), until the end of the line.
4) Scan the next line, until the whole image is processed.
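The four steps above can be sketched as follows. Since the paper leaves the reduction rule unspecified ("according to some rules"), a simple multiplicative factor `atten` stands in for it; the default threshold values are likewise illustrative.

```python
import numpy as np

def preprocess(image, grey=128, flag=10, atten=0.8):
    """Line-by-line adaptive pre-processing (steps 1-4).

    `grey` is the grey-level threshold GREY, `flag` the window
    threshold FLAG; `atten` is a stand-in for the paper's
    unspecified attenuation rule applied to short bright runs."""
    out = image.astype(float).copy()
    for row in out:                      # step 4: process every line
        n = 0                            # counter N
        for x, g in enumerate(row):      # step 2: scan along the line
            if g > grey:
                n += 1                   # extend the current bright run
            else:
                # step 3: runs shorter than FLAG are attenuated
                if 0 < n < flag:
                    row[x - n:x] *= atten
                n = 0
    return out
```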
(a) chili pepper
(b) vase
Fig. 4. Modeling by using SFS algorithm with adaptive
pre-processing.
If GREY and FLAG are selected properly, the result will be good after one scan. If some bright spots remain at the edge of the image, median filtering may be needed. After the processing, a higher-quality geometrical model can be built, as shown in Fig. 4. A high-quality geometrical model is the basis of high-quality CNC code.
3.3. CNC Programming Using
Approximation Method of Polygon
The cutting path can be derived through chain-code tracing of the object image contour, then using the adaptive including box (AIB) algorithm or the polygon approximation method. The AIB method can eliminate the effect of noise and obtain the correct contour of the work piece, as elaborated in reference [14]. The polygon approximation method has higher precision and is the method used in this paper to generate CNC code.
The information in an image is redundant, so the polygon approximation method can be used to describe a curved contour with a polygon. In order to obtain a proper approximation, an error index Emax is used to measure the degree of approximation.
Fig. 5. Error of approximation method of polygon.
As shown in Fig. 5, assume the curve from A to B is approximated by the line segment AB. Let (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) be the coordinates of the points on the curve, and d_i (i = 2, \ldots, N-1) the distance from (x_i, y_i) to the line segment AB. Then:

E_{max} = \max_{2 \le i \le N-1} d_i (8)
The procedure of the polygon approximation method is as follows:
1) Determine the start point P_1 of the cutting path, and find the point P_max that is furthest from P_1 on the path. The two points divide the path into two segments.
2) Let the initial position of P_m be P_max. Starting from P_1, traverse the curve from P_1 to P_m and find the point P_i that has the maximum error E_max.
3) If E_max is greater than the permitted error, let P_i be the new P_m and return to step 2).
4) Otherwise, P_i is a vertex of the polygon, P_1 P_i is a line segment, and a linear CNC code (G01) is generated. Let P_i be the new P_1 and P_max the new P_m, then return to step 2).
5) When P_i coincides with P_max, the contour from P_1 to P_max has been processed. The contour from P_max back to P_1 can be processed in the same way.
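The iterative furthest-point procedure above is close in spirit to the classic Douglas-Peucker split. A minimal sketch, in which a recursive split replaces the paper's explicit P_1/P_m bookkeeping:

```python
import math

def point_seg_dist(pt, a, b):
    """Distance d_i from point pt to line segment AB (cf. Eq. (8))."""
    (px, py), (ax, ay), (bx, by) = pt, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # clamp the projection parameter to stay on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def approximate(points, emax):
    """Polygon approximation of a contour: keep a vertex wherever the
    deviation from the chord exceeds the permitted error emax."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    d = [point_seg_dist(p, a, b) for p in points[1:-1]]
    i = max(range(len(d)), key=d.__getitem__) + 1   # worst point index
    if d[i - 1] <= emax:
        return [a, b]                 # chord is close enough: one G01 move
    left = approximate(points[:i + 1], emax)        # split at worst point
    return left[:-1] + approximate(points[i:], emax)
```

Each consecutive vertex pair returned by `approximate` would correspond to one linear interpolation (G01) block in the generated NC program.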
4. Machining Example
The machine vision based manufacturing technique can be used to machine 2-D curves or 3-D curved surfaces. For a 3-D curved surface, the uncut chip is removed layer after layer; each layer is a plane curve with the same z coordinate. With the layer cutting method, 3-D machining can be turned into 2-D machining, so only plane cutting is discussed. This method is often called 2.5-axis machining in engineering. After the geometrical model is built, the CNC programming process for layer cutting is as follows:
1) Determine the cutting depth, which is the distance between layers. The value of the cutting depth depends on the work-piece material, the cutting tool material, and the machining requirements.
2) Determine the height of the cutting layer, which is the z coordinate of the cutting path of that layer.
3) Calculate the intersection region of the cutting plane and the curved surface, which may be a single region or several islands, each with its own closed boundary.
4) Determine the cutting path in the 2-D region. Machining can be performed with line cutting or circle cutting algorithms; some existing algorithms can be referenced.
After the tool path is generated, NC code for a specific machine tool can be generated with the post-processing program for that machine tool.
A prototype system, ImageCAM, has been developed to verify the algorithms of this paper. It can perform the total process of machine vision based manufacturing, including pre-processing of the image, modeling of the work piece, cutting path planning, and NC program generation. The CNC system used in the research is SKY2000-I, developed by SKY Co., which is based on the Windows platform. The system has a 32-bit CPU, supports standard networking and a Chinese operating interface, and is widely used in medium and small machine shops. Fig. 6 shows the machining parameters input interface of ImageCAM.
Fig. 7 shows the four main stages of machining a water pot from a picture. Fig. 7 (a) is the original bitmap of the water pot. Fig. 7 (b) is the three-dimensional model built from Fig. 7 (a); this is the most important step of the machine vision based manufacturing process, since the precision of machining depends greatly on the quality of the model. Fig. 7 (c) is the tool path of the work-piece; the line cutting method is adopted in the research. Fig. 7 (d) is the work-piece machined in wax.
Fig. 6. Machining parameters input interface.
Fig. 7 (a). Machining example with machine vision based
manufacturing system – bitmap of sample.
Fig. 7 (b). Machining example with machine vision based
manufacturing system – geometrical model.
Fig. 7 (c). Machining example with machine vision based
manufacturing system – tool path from CNC code.
Fig. 7 (d). Machining example with machine vision based
manufacturing system – wax work piece.
5. Conclusions
The machine vision based manufacturing technique is a part of intelligent manufacturing technology and can find application in many engineering areas, for example fast prototype manufacturing, reverse engineering, and handicraft manufacturing. Some key techniques are researched in this paper. A pre-processing method is proposed by which a high-quality image can be obtained, combined with median filtering, so that machining quality can be improved. The polygon approximation method can be used to generate the tool path very quickly. The prototype system ImageCAM can perform the total process of machine vision based manufacturing, including pre-processing of the image, modeling of the work piece, cutting path planning, and NC program generation, which proves that the algorithms of this paper are feasible.
Some problems remain to be researched before the technique can be used in engineering. The most important of these is the precision of the geometrical model of the work piece. Simple and precise modeling algorithms are still being sought.
References
[1]. Ambarish G. Mohapatra, Computer vision based
smart lane departure warning system for vehicle
dynamics control, Sensors & Transducers, Vol. 132,
Issue 9, September 2011, pp. 122-135.
[2]. Q. Zheng, R. Chellappa, Estimation of illumination
direction, albedo, and shape from shading, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, Vol. 13, Issue 7, 1991, pp. 680-702.
[3]. J. Oliensis, Shape from shading as a partially well-
constrained problem, CVGIP: Image Understanding,
Vol. 54, Issue 2, 1991, pp. 163-183.
[4]. Takayuki Okatani, Koichiro Deguchi, Shape
reconstruction from an endoscope image by shape
from shading technique for a point light source at the
projection center, Computer Vision and Image
Understanding, Vol. 66, Issue 2, 1997, pp. 119-131.
[5]. P. Dupuis, J. Oliensis, Shape from shading: provably
convergent algorithms and uniqueness results, in
Proceedings of the European Conference on
Computer Vision (ECCV’94), Stockholm, Sweden,
1994, pp. 259-268.
[6]. K. M. Lee, C.-C. J. Kuo, Shape from shading with a
linear triangular element surface model, IEEE
Transactions on Pattern Analysis and Machine
Intelligence, Vol. 15, Issue 8, 1993, pp. 815-822.
[7]. R. Kimmel, A. M. Bruckstein, Global shape from
shading, Computer Vision and Image Understanding,
Vol. 62, Issue 3, 1995, pp. 360-369.
[8]. I. Shimshoni, R. Kimmel, A. M. Bruckstein,
DIALOGUE, Global shape from shading, Computer
Vision and Image Understanding, Vol. 64, Issue 1,
1996, pp. 188-189.
[9]. B. K. P. Horn, Obtaining shape from shading
information, in shape from shading, MIT Press,
Cambridge, MA, 1989, pp. 123-171.
[10]. Jin-Hua Sheng, A three-dimensional image
processing system shape from shading, Pattern
Recognition & Artificial Intelligence, Vol. 4, Issue 4,
1991, pp. 46-52.
[11]. Ma Songde, Zhang Zhengyou, Computer Vision,
Science Press, Beijing, 1998, pp. 194-208.
[12]. Bang-Hwan Kim, Rae-Hong Park, Shape from
shading and photometric stereo using surface
approximation by Legendre polynomials, Computer
Vision and Image Understanding, Vol. 66, Issue 3,
1997, pp. 255-270.
[13]. He Bin, Ma Tianyu, Wang Yunjian et al., Visual C++
data image processing, People Post Press, Beijing,
2001, pp. 5-8.
[14]. Jia Xicun, 3-D copy technique based on machine
vision, Master Thesis, Yanshan University, 2003.
Sensors & Transducers, Vol. 159, Issue 11, November 2013, pp. 19-25
Sensors & Transducers
© 2013 by IFSA
http://www.sensorsportal.com
A Detection Algorithm for Image Copy-move
Forgery Based on Improved Circular Projection
Matching and PCA
Yanfen GAN, Jing CANG
Information Science and Technology Department
Guangdong University of Foreign Studies South China Business College,
Guangzhou, 510545, China
Tel.: +8613751800978
E-mail: 315139752@qq.com
Received: 19 August 2013 /Accepted: 25 October 2013 /Published: 25 November 2013
Abstract: Because general algorithms seldom detect copy-move forgery with angle rotation and block matching algorithms are very time-consuming, this paper proposes a detection algorithm which is able to detect copy-move forgery with rotation of certain angles by using the direction invariance of the circular projection vector. At the same time, considering the influence of random noise and brightness changes on the circular projection vector, the vector is improved to make it robust. To reduce the computation time, the algorithm also constructs a data matrix from the circular projection vector of each image block and significantly reduces the dimensionality of the data using principal component analysis. Its detection speed is clearly faster than that of general block matching algorithms. The experimental results show that the improved circular projection matching algorithm is less time-consuming, able to resist a certain degree of angle rotation in copy-move operations, and relatively robust to the influence of random noise and illumination.
Copyright © 2013 IFSA.
Keywords: Circular projection matching, Rotation invariance, PCA, Image matching, Copy-move forgery.
1. Introduction
Generally, a picture is used as strong evidence that an
event occurred. With the
rapid development of image editing software, digital
images are easily modified and it is becoming
increasingly easier to generate vivid images. Forged
images are more and more frequently found in
tabloids, magazines and mainstream media or used as
evidence submitted to courts. Some are even used in
scientific frauds. Therefore, it is urgent to study
image forgery detection algorithms so as to identify
the authenticity and integrity of images.
Detection algorithms for digital image forgery fall
into two types. One includes the image authentication
based on fragile watermarking and the image
authentication based on digital signatures, both of
which belong to the active algorithm. Watermark or
auxiliary information is required to be added in
advance. In the real world, however, this information
cannot be added to all images in advance. Another
type is the blind detection technique, a kind of
passive authentication. The blind detection technique
relies on the characteristics of images to authenticate
them. However, due to the complex authentication
method, it has become a more challenging subject
and also has wider potential applications.
There are a variety of methods for image forgery [9]. Copy-move forgery within the same image is common: a portion of the image is copied and moved to another position so as to cover up persons or objects in the image. Detection algorithms for this kind of image forgery are blind detection techniques, and several have been proposed. Most of them judge authenticity based on the characteristic that copy-move results in large similar areas within the image. Fridrich [1] first analyzed the
exhaustive search algorithm and proposed a block
matching detection method based on discrete cosine
transform (DCT) which significantly improves the
efficiency of the exhaustive search algorithm.
Popescu [2] proposed a similar method which reduces
the dimensionality of feature vectors by using
principal component analysis (PCA) instead of
discrete cosine transform. Experiments have shown
that his algorithm is more efficient. Subsequently,
many scholars carried out further studies on the basis
of block matching algorithms. Wu Qiong [3]
transformed the original image into a 1/4 similar
image using discrete wavelets before partitioning,
used singular value decomposition to reduce
dimensionality and finally located according to the
lexicographic order. Jing Li et al. [4, 10] proposed a detection and location algorithm for image region duplication forgery based on phase correlation, in order to overcome the low efficiency of block
algorithms does not resist rotation and noise and is
time consuming. This paper mainly studies how to
improve the anti-rotation and anti-noise of the
algorithm and how to improve detection efficiency
using grayscale information. This algorithm first
divides the detected blocks by row and column, then
calculates the improved circular projection vectors of
the image blocks, constructs the projection data
matrix of the image through the circular projection
vectors of all image blocks, uses PCA to reduce the
dimensionality of the matrix, sorts that matrix in the
lexicographic order and finally judges whether the
sorted adjacent image block is the image block that
has been copied and moved by confidence distance.
The flow chart of the algorithm is shown in Fig. 1 below. The remainder of this paper is organized as follows: Section 2 introduces the improved circular projection matching algorithm. The application of PCA in the detection algorithm for image copy-move forgery is presented in Section 3. Experimental results and conclusions are described in Sections 4 and 5 respectively.
2. The Improved Circular Projection
Matching Algorithm
2.1. Conventional Pixel Matching Rotation
Sensitivity Analysis
Assume that f1 and f2, with the same size m×n, are two moved image blocks in the forged image S shown in Fig. 2, where f1 is the copied image block and f2 is the moved and pasted image block. Conventional matching algorithms compare the copied image block f1 and the pasted image block f2 by the similarity of the gray values of corresponding pixels, and take this similarity as the criterion for a match. Obviously, when relative rotation exists between f1 and f2, the corresponding pixels change. When the rotation angle is small, the algorithms can still find correct matching positions due to the similarity between adjacent pixels. However, when the rotation angle is large, the difference in gray values increases and conventional matching algorithms no longer work.
Fig. 1. Flow chart of the algorithm.
Fig. 2. The forged image S.
The key to solving the matching problem when a
random angle rotation exists between the copied
image f1 and the pasted image f2 is to find a rotation
invariant. The circular projection matching algorithm [5, 6] was proposed based on the isotropy and projection features of a circle. In this algorithm, the sum of the pixels on each concentric circle of a given radius is calculated and taken as the projection datum at that radius:
P(r) = \sum_{\theta=0}^{2\pi} T(r, \theta), \quad 0 \le r \le R, (1)

where r = \sqrt{(m-\phi)^2 + (n-\varphi)^2} and R is the radius of
the maximum inscribed circle of the image, shown
as Fig. 3.
Fig. 3. Circular projection vector.
When the image rotates, the pixels on the circle
with any radius also rotate along with the radiuses of
the concentric circles, so P(r) remains unchanged,
which means that it is theoretically possible to realize
matching of random angle rotation.
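A sketch of computing the projection vector of Eq. (1) for a square block. Rounding distances to integer radii and limiting the radius to the inscribed circle are implementation choices, not prescribed by the paper:

```python
import numpy as np

def circular_projection(block):
    """Circular projection vector of Eq. (1): for each integer radius r,
    sum the grey values of the pixels whose distance from the block
    centre rounds to r. Also returns the ring pixel counts S(r),
    needed later for the ring average P1(r) = P(r)/S(r)."""
    m, n = block.shape
    cy, cx = (m - 1) / 2.0, (n - 1) / 2.0
    yy, xx = np.mgrid[0:m, 0:n]
    r = np.rint(np.hypot(yy - cy, xx - cx)).astype(int)
    R = min(m, n) // 2                 # radius of the inscribed circle
    P = np.zeros(R + 1)
    S = np.zeros(R + 1)
    for radius in range(R + 1):
        mask = r == radius
        P[radius] = block[mask].sum()  # ring sum P(r)
        S[radius] = mask.sum()         # pixels on the ring, S(r)
    return P, S
```

Because a 90-degree rotation only permutes pixels within each ring, the vector is unchanged by it, which illustrates the rotation invariance argued above.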
2.2. The Improved Circular Projection
Matching Algorithm
In practice, some problems arise when the circular projection matching algorithm is used to detect copy-move in an image, including changes to the brightness of the copied region and added noise, which requires us to improve the circular projection matching algorithm.
2.2.1. Noise Impact Analysis and
Improvement
Images transmitted in channels are interfered with by different kinds of noise, the most common being Gaussian noise. Gaussian noise refers to a noise signal whose frequency spectrum components are uniformly distributed (white noise) and whose amplitudes obey the Gaussian distribution. It is also additive and regarded as one kind of white noise. Literature [8] has shown that Gaussian noise does not change the mean gray value of an image; it only affects the alternating current component of the image. At the same time, in analyzing each
of an image. At the same time, in analyzing each
projection vector, it is found that the bigger the radius
of the projection vector is, the greater the cumulative
change is. In other words, the influence of the
alternating current component on the image increases
with the radius. Literatures [4, 5] improve the algorithm by replacing the projection value with the grayscale average over each concentric circle. The component of the projection is

P_1(r) = P(r) / S(r), (2)

where S(r) is the number of pixels included in the concentric circle of radius r.
2.2.2. Brightness Impact Analysis
and Improvement
The gray value of each pixel of an image is
mainly decided by the brightness of the light
reflected by the surface of a scene, so light will
brighten or darken the overall image, which is
equivalent to adding a direct current component.
Similarly, the grayscale and contrast changes can also
be attributed to the influence of the direct current
component or the alternating current component of an
image. How to reduce the influence of the alternating
current component has been discussed above. As to
how to reduce the influence of the direct current component, the circular projection vector must be reconstructed so that it has grayscale translation invariance, rather than relying on averaging alone.
The improvement made in literatures [5, 6] is:
P_2(r) = P_1(r) - \frac{1}{R+1} \sum_{r=0}^{R} P_1(r), (3)

According to Formula (3), changes in noise and light intensity will result in changes in the alternating current component of a circular projection, and as a result the values P(r) at different radii increase as the radius increases. In this case, literatures [5, 6] adopted the idea of normalization for processing:

P_3(r) = P_2(r) \Big/ \sum_{r=0}^{R} P_1(r), (4)
At the same time, considering that the main characteristics of a scene are concentrated near the center, a variable weight is used:
P_4(r) = W(r)\,P_3(r) = W(r) \left[ P_1(r) - \frac{1}{R+1} \sum_{r=0}^{R} P_1(r) \right] \Big/ \sum_{r=0}^{R} P_1(r), (5)
where W(r) is the variable weight vector. This paper
uses the inconsistency correction method to reduce
the influence of brightness based on variable weights.
P_5(r) = W(r) \left[ P_1(r) - \frac{1}{R+1} \sum_{r=0}^{R} P_1(r) \right], (6)
Equations (5) and (6) subtract an estimated direct
current component. The difference is that Equation
(5) directly estimates the projection of the original
component while Equation (6) uses the projection
data in which noise has been filtered out.
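The improved vector of Eq. (6) can be sketched as follows. The concrete form of the weight W(r) is an assumption here (the paper does not give it); any weight that emphasises small radii would serve the same purpose.

```python
import numpy as np

def improved_projection(P, S, W=None):
    """Improved circular projection P5(r) of Eq. (6): average each
    ring to suppress noise (Eq. (2)), subtract the mean over radii
    to remove the brightness DC component, and apply a
    centre-emphasising weight W(r)."""
    P1 = P / S                              # Eq. (2): ring averages
    if W is None:
        # illustrative weight, not from the paper: emphasise small radii
        W = 1.0 / (1.0 + np.arange(len(P1)))
    return W * (P1 - P1.mean())             # subtract the estimated DC term
```

Shifting every grey value by a constant shifts every ring average by the same constant, so the mean subtraction cancels it, which is exactly the grayscale translation invariance sought above.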
2.2.3. Analysis of the Experimental Results
of the Improved Circular Projection
Matching Algorithm
The improved circular projection matching algorithm above is verified. A point with coordinates (148, 70) is selected. After the image is rotated by 60 degrees, the coordinates of the point become (182, 118).
For the point (148, 70), a circular projection is conducted according to Formula (1) with radius 7. The projection vector of the original image is P1(r) = [43 24 18.634 16.012 14.357 13.975 13.994 13.943].
Gaussian noise (σ = 0.01) is added to the point (182, 118). After the brightness γ is adjusted, a circular projection is conducted according to Formula (5) to obtain the new projection vector P4(r) = [0.2181 0.30401 0.32234 0.3046 0.27542 0.26652 0.26472 0.261870].
Similarly, Gaussian noise (σ = 0.01) is added to the point (182, 118). After the brightness γ is adjusted, a circular projection is conducted according to Formula (6) to obtain the new projection vector P5(r) = [0.14850 0.09486 0.01871 -0.00154 0.014067 -0.00076 -0.00656 -0.00651]. After calculation, the correlation coefficient between P1(r) and P4(r) is 0.9487, and the correlation coefficient between P1(r) and P5(r) is 0.9701. The results show that the new projection vector obtained according to Equation (6) has better performance in resisting rotation, noise and brightness changes.
3. Application of PCA in the Detection
Algorithm for Image Copy-Move
Forgery
PCA is a linear dimensionality reduction method
based on one-dimensional vectors. It is used to find
the best low-dimensional representation of the
original high-dimensional data based on least mean-
square error. It can transform many indicators into a few comprehensive ones, selecting a few important variables from many through a linear transformation so as to reduce dimensionality.
Therefore, the first step is to slide a window pixel by pixel to divide the image of size m × n (512 × 512) into L image blocks of size 2k × 2k. A circular projection is conducted for each image block. The grayscale function of a 2k × 2k image block is f(x, y). The circular projection of a block is the one-dimensional vector P_T = [P(0), P(1), …, P(K)]. The L one-dimensional vectors are written into the projection feature matrix of the image U = [P_T(1) P_T(2) … P_T(L)]^T, where P_T(i) (i = 1, 2, …, L) is the row vector constituted by the circular projection of the i-th image block. It can be seen that the size of U is L × K.
Then PCA is used to reduce the dimensionality of
the matrix U. The steps are as follows:
Step 1: Normalize the projection feature matrix U.
The data in the matrix have different natures and
different orders of magnitude. If no normalization is
applied, the impact of high-value indicators will
reduce the impact of low-value indicators in the
analysis, so that small data are ignored. As a result,
various data will participate in the operation at
unequal weights. There are a great many methods for
normalization. Here the standard normal distribution
normalization is used:
P_T(i)' = \left( P_T(i) - \bar{P}_T \right) \Big/ \sqrt{ \frac{1}{L} \sum_{i=1}^{L} \left( P_T(i) - \bar{P}_T \right)^2 }, (7)

where \bar{P}_T = \frac{1}{L} \sum_{i=1}^{L} P_T(i), P_T(i)' is the normalized row vector, and Unew is the matrix composed of the normalized row vectors.
Step 2: Obtain the covariance matrix V of Unew.
V = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1p} \\ v_{21} & v_{22} & \cdots & v_{2p} \\ \vdots & \vdots & & \vdots \\ v_{n1} & v_{n2} & \cdots & v_{np} \end{bmatrix}, (8)
where V_{ij} (i = 1, \ldots, n; j = 1, \ldots, p) is the correlation coefficient between the variables P_T(i)' and P_T(j)', calculated according to the following formula:

V_{ij} = \frac{ \sum_{k=1}^{n} \left( p_{T(k)i} - \bar{p}_{Ti} \right) \left( p_{T(k)j} - \bar{p}_{Tj} \right) }{ \sqrt{ \sum_{k=1}^{n} \left( p_{T(k)i} - \bar{p}_{Ti} \right)^2 \sum_{k=1}^{n} \left( p_{T(k)j} - \bar{p}_{Tj} \right)^2 } }, (9)
Step 3: Obtain the eigenvalues λ_i of V and the corresponding eigenvectors e_i (i = 1, \ldots, p).
Step 4: Calculate the contribution and cumulative contribution of the principal components. The contribution of the principal component W_i is

W_i = \lambda_i \Big/ \sum_{k=1}^{n} \lambda_k \quad (i = 1, \ldots, p), (10)
and the cumulative contribution Q_i is

Q_i = \sum_{k=1}^{i} \lambda_k \Big/ \sum_{k=1}^{n} \lambda_k, (11)
Take the 1st, 2nd, …, h-th principal components corresponding to the eigenvalues λ_1, λ_2, …, λ_h (h ≤ p) whose cumulative contribution reaches 85 %.
Step 5: Construct the dimensionality reduction transformation matrix S from the eigenvectors corresponding to λ_1, λ_2, …, λ_h, sorted in descending order of eigenvalue. Calculate U' = Unew · S to complete the transformation from the high-dimensional U to the low-dimensional U'. Finally, sort U' in lexicographic order and seek out the copy-move area, in combination with the offset confidence distance, according to the sorted image.
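The five steps, followed by the lexicographic sort and neighbour comparison, can be sketched as below. Two simplifications relative to the paper: normalisation is applied per feature column rather than per row vector, and the offset confidence-distance test on matched pairs is omitted.

```python
import numpy as np

def pca_reduce(U, target=0.85):
    """Steps 1-5: normalise the L x K projection matrix U, then keep the
    principal components whose cumulative contribution (Eq. (11))
    reaches `target`."""
    # Step 1: standard-normal normalisation (per feature column here)
    Unew = (U - U.mean(axis=0)) / U.std(axis=0)
    # Step 2: covariance matrix of Eq. (8)
    V = np.cov(Unew, rowvar=False)
    # Step 3: eigenvalues and eigenvectors, sorted in descending order
    lam, vec = np.linalg.eigh(V)
    order = np.argsort(lam)[::-1]
    lam, vec = lam[order], vec[:, order]
    # Step 4: cumulative contribution Q_i of Eq. (11)
    Q = np.cumsum(lam) / lam.sum()
    h = int(np.searchsorted(Q, target)) + 1
    # Step 5: project onto the first h components, U' = Unew . S
    return Unew @ vec[:, :h]

def find_duplicates(U_reduced, positions, threshold=1e-3):
    """Lexicographic sort, then compare neighbouring rows; close pairs
    are candidate copy-move blocks (the confidence-distance test on
    block offsets is omitted from this sketch)."""
    order = np.lexsort(U_reduced.T[::-1])   # primary key: first column
    pairs = []
    for a, b in zip(order, order[1:]):
        if np.linalg.norm(U_reduced[a] - U_reduced[b]) < threshold:
            pairs.append((positions[a], positions[b]))
    return pairs
```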
4. Experimental Results and Analysis
In order to verify the validity of this algorithm, this paper selects 150 natural images and uses Photoshop for copy-move. The copy-move methods include rotation of 0°-30° (150 images), rotation of 30°-60° (150 images) and rotation above 60° (150 images). To the above images, Gaussian noise (σ = 0.01) is added and the brightness is adjusted to γ = 0.6.
All experiments are conducted on a computer with a dual-core 2.4 GHz CPU, 2 GB of memory and the Windows XP operating system. The size of the image block is 16×16. Matlab 7 is used to program the algorithm in this paper and the other algorithms compared.
First, the above forged image sample is used to
test the resistance of the algorithm proposed in this
paper against rotation angles. The test results are
shown in Table 1.
The data in the table show that the detection rate exceeds 93 % for copy-move forgery after rotation of 0°-60°. Fig. 4 and Fig. 5 show the detection results for a test image called "sunflower" which is copied and pasted after rotation of 30° and 60° respectively. Both cases are detected correctly.
However, when the rotation angle exceeds 60°, the detection rate of the proposed algorithm decreases sharply. The circular projection matching algorithm is theoretically rotation invariant, but in a digital implementation, the larger the rotation angle, the greater the interpolation error introduced when the rotated block is pasted; the projection difference between the original image block and the pasted image block therefore grows, and the detection fails.
Table 1. Test of copy-move forgery detection after rotation
of different angles.

Type of image | Number of images | Detected | Detection rate (%)
Rotate 0°-60° | 300              | 280      | 93.3
Rotate >60°   | 150              | 12       | 8.0
(a) Original image
(b) Copied and pasted image after rotation of 30°
(c) Detected image
Fig. 4. Detection result of copy-move forgery
after rotation of 30°.
Second, a comparison is made with the algorithms proposed in literatures [2-4] in terms of running time. Because the number of blocks is a crucial factor that affects the