Adaptive image watermarking algorithm based on an efficient perceptual mapping model with FPGA implementation
Abstract
Watermark imperceptibility is a significant factor in keeping watermarked images perceptually similar to the original ones. Effective watermark imperceptibility requires a perceptual model that simulates the human visual system in order to hide the watermark bits where the human eye cannot observe them. Current perceptual watermarking models rely on complex computations that are difficult to implement in embedded systems or real-time applications. In this thesis, a low-complexity, integer-based Lifting Wavelet Transform (LWT) is used to build a perceptual mapping model that relies mainly on a new texture mapping model called the Accumulative Lifting Difference (ALD). The ALD is combined with simplified edge-detection and luminance-masking models to obtain a comprehensive perceptual mapping model that offers high noise tolerance while using only low-complexity calculations. The proposed model was 7% faster than the fastest compared pixel-based model, with an average PSNR gain of 2.75 dB. Compared with the sub-band model offering the largest noise tolerance, the proposed JND model achieved a PSNR gain of 1.78 dB and executed up to 90% faster. The perceptual model is applied in a proposed image watermarking algorithm to determine the maximum watermark embedding intensity that remains invisible to the human eye. Experimental results show that the proposed algorithm produced high-quality watermarked images, with SSIM values above 0.95 for all tested images, and was robust against various geometric and non-geometric attacks. For real-time response, the watermarking algorithm was designed, implemented, and tested on an Altera® Cyclone-II FPGA device using VHDL. The parallel structure of the design allows the system to run at a clock speed of 101.02 MHz, making it suitable for emerging real-time applications.
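To illustrate why an integer-based LWT is attractive for embedded hardware, the sketch below shows one level of the standard reversible Le Gall 5/3 lifting transform. This is a generic example of integer lifting, not the thesis's ALD model: every step is an integer add, subtract, or shift, which maps directly onto FPGA logic, and the transform is exactly invertible.

```python
def lwt53_forward(x):
    """One level of the integer Le Gall 5/3 lifting wavelet transform.

    Generic illustration of integer lifting (not the thesis's exact
    ALD texture model). Assumes len(x) is even; boundaries use
    symmetric extension. Only integer adds and shifts are needed.
    """
    n = len(x)
    d = []  # predict step: detail (high-pass) coefficients
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]  # edge
        d.append(x[2 * i + 1] - ((left + right) >> 1))
    s = []  # update step: approximation (low-pass) coefficients
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[0]  # edge
        s.append(x[2 * i] + ((dl + d[i] + 2) >> 2))
    return s, d

def lwt53_inverse(s, d):
    """Undo the lifting steps exactly (integer-to-integer, lossless)."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):  # recover even samples first
        dl = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - ((dl + d[i] + 2) >> 2)
    for i in range(len(d)):  # then odd samples from the details
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + ((left + right) >> 1)
    return x
```

Large detail coefficients mark textured or edge regions, which is the kind of local activity a texture map such as the ALD accumulates to decide how much watermark energy a region can hide.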