The growth of networked multimedia systems has magnified the need for image copyright protection. One approach used to address this problem is to add an invisible structure to an image that can be used to seal or mark it. These structures are known as digital watermarks. In this paper we describe two techniques for the invisible marking of images. We analyze the robustness of the watermarks with respect to linear and nonlinear filtering, and JPEG compression. The results show that our watermarks can be used to detect all but the most minute changes to the image.
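As a hedged illustration of the general idea (not the two specific techniques of this paper), the sketch below adds a keyed pseudo-random pattern to the image at low amplitude and later detects it by correlation; the amplitude alpha and the detection threshold are illustrative assumptions.

```python
import numpy as np

def embed(image, key, alpha=2.0):
    # Keyed bipolar pattern; alpha trades off invisibility vs. detectability.
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + alpha * w, 0, 255), w

def detect(image, w, threshold=0.5):
    # Correlate the (possibly attacked) image with the known pattern.
    stat = float(np.mean((image - image.mean()) * w))
    return stat > threshold, stat

img = np.full((64, 64), 128.0)
marked, w = embed(img, key=42)
print(detect(marked, w))   # (True, ~alpha) for the unmodified marked image
```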
The growth of networked multimedia systems has created a need for the copyright protection of digital images. Copyright protection involves the authentication of image ownership and the identification of illegal copies of a (possibly forged) image. One approach is to mark an image by adding an invisible structure known as a digital watermark. In this paper we further study the techniques for marking images introduced in [1]. In particular, we describe how our techniques withstand random errors. We also provide more details about our verification procedure. Finally, we discuss the recently proposed IBM attack.
The growth of networked multimedia systems has complicated copyright enforcement for digital images. One way to protect the copyright of a digital image is to add an invisible structure (known as a digital watermark) to the image to identify the owner. For Internet and image database applications in particular, it is important that as much of the watermark as possible remain in the image after compression. Image-adaptive watermarks are particularly resistant to removal by signal processing attacks such as filtering or compression. Common image-adaptive watermarks operate in a transform domain (DCT or wavelet); the same domains are also used by popular image compression techniques (JPEG, EZW). This paper investigates whether matching the watermarking domain to the compression transform domain makes the watermark more robust to compression.
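A minimal sketch of the matched-domain idea, assuming JPEG-style 8x8 blocks: the watermark is embedded directly in the block DCT coefficients that JPEG quantizes. The coefficient positions and strength alpha are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_block(block, bits, alpha=4.0):
    # block: 8x8 pixel block; bits: +/-1 values for a few mid-frequency slots.
    C = dctn(block, norm='ortho')            # same 8x8 DCT domain as JPEG
    slots = [(2, 1), (1, 2), (2, 2)]         # illustrative mid-frequency positions
    for (u, v), b in zip(slots, bits):
        C[u, v] += alpha * b                 # additive mark on the coefficient
    return idctn(C, norm='ortho')

marked = embed_block(np.full((8, 8), 100.0), bits=[1, -1, 1])
```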
Two classes of digital watermarks have been developed to protect the copyright ownership of digital images. Robust watermarks are designed to withstand attacks on an image (such as compression or scaling), while fragile watermarks are designed to detect minute changes in an image. Fragile marks can also identify where an image has been altered. This paper compares two fragile watermarks. The first method uses a hash function to obtain a digest of the image. An altered or forged version of the original image is then hashed, and its digest is compared to the digest of the original image. If the image has changed, the digests will be different. We will describe how images can be hashed so that any changes can be spatially localized. The second method uses the Variable-Watermark Two-Dimensional (VW2D) algorithm [1]. The sensitivity to changes is specified by the user. Either no changes are permitted (similar to a hard hash function), or an image can be altered and still be labeled authentic. Algorithms of the latter type are known as semi-fragile watermarks. We will describe the performance of these two techniques and discuss under what circumstances each would be used.
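A minimal sketch of the hash-based method with spatial localization, assuming SHA-256 and 16x16 blocks (both illustrative choices): each block is digested separately, so a digest mismatch points directly to the altered region.

```python
import hashlib
import numpy as np

def block_digests(image, bs=16):
    # One digest per bs x bs block of the image.
    h, w = image.shape
    return {(r, c): hashlib.sha256(image[r:r+bs, c:c+bs].tobytes()).hexdigest()
            for r in range(0, h, bs) for c in range(0, w, bs)}

def changed_blocks(orig_digests, test_image, bs=16):
    # Blocks whose digests no longer match are the tampered regions.
    return [k for k, d in block_digests(test_image, bs).items()
            if orig_digests[k] != d]

img = np.zeros((64, 64), dtype=np.uint8)
ref = block_digests(img)
img[20, 20] = 255                       # tamper with a single pixel
print(changed_blocks(ref, img))         # [(16, 16)] -> the altered block
```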
The growth of new imaging technologies has created a need for techniques that can be used for the copyright protection of digital images. Copyright protection involves the authentication of image content and/or ownership and can be used to identify illegal copies of a (possibly forged) image. One approach to copyright protection is to introduce into the image an invisible signal known as a digital watermark. In this paper, we describe digital image watermarking techniques, known as perceptually based watermarks, that are designed to exploit aspects of the human visual system. In the most general sense, any watermarking technique that attempts to incorporate an invisible mark into an image is perceptually based. However, in order to provide transparency (invisibility of the watermark) and robustness to attack, more sophisticated use of perceptual information in the watermarking process is required. Several techniques have been introduced that incorporate a simple visual model in the marking procedure. Such techniques usually take advantage of frequency selectivity and weighting to provide some perceptual criteria in the watermarking process. Even more elaborate visual models are used to develop schemes that not only take advantage of frequency characteristics but also adapt to the local image characteristics, yielding schemes that are extremely robust as well as transparent. We present examples from each category: from the simple schemes that guarantee transparency to the more elaborate schemes that use visual models to provide robustness as well as transparency.
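A hedged sketch of the image-adaptive idea (the mapping from local statistics to embedding strength below is an assumption, not a published visual model): the mark is amplified in textured regions, where the visual system masks distortion, and attenuated in smooth regions.

```python
import numpy as np

def adaptive_embed(image, key, base_alpha=1.0, bs=8):
    # Keyed pattern, scaled block by block with local activity (std. deviation).
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=image.shape)
    out = image.astype(float).copy()
    for r in range(0, image.shape[0], bs):
        for c in range(0, image.shape[1], bs):
            blk = image[r:r+bs, c:c+bs]
            strength = base_alpha * (1.0 + np.std(blk) / 32.0)  # busier -> stronger
            out[r:r+bs, c:c+bs] += strength * w[r:r+bs, c:c+bs]
    return np.clip(out, 0, 255)
```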
We describe a blind watermarking technique for digital images. Our technique constructs an image-dependent watermark in the discrete wavelet transform (DWT) domain and inserts the watermark in the most significant coefficients of the image. The watermarked coefficients are determined by using the hierarchical tree structure induced by the DWT, similar in concept to embedded zerotree wavelet (EZW) compression. If the watermarked image is attacked or manipulated such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization. Our technique also uses a visually adaptive scheme to insert the watermark and minimize its perceptibility. The visually adaptive scheme also takes advantage of the tree structure. Finally, a template is inserted into the watermark to provide robustness against geometric attacks. The template detection uses the cross-ratio of four collinear points.
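The cross-ratio is invariant under projective transformations, which is what makes it usable for template detection after geometric attacks. For four collinear points at positions a, b, c, d along their common line, one standard definition is (c - a)(d - b) / ((c - b)(d - a)), computed in the small sketch below.

```python
def cross_ratio(a, b, c, d):
    # a, b, c, d: positions of four collinear points along their common line.
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

print(cross_ratio(0.0, 1.0, 2.0, 3.0))   # 4/3; unchanged by any projective map
```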
While digital watermarking has received much attention in recent years, it is still a relatively young technology. There are few accepted tools/metrics that can be used to evaluate the suitability of a watermarking technique for a specific application. This lack of a universally adopted set of metrics/methods has motivated us to develop a web-based digital watermark evaluation system called the Watermark Evaluation Testbed, or WET. This paper describes several improvements over the first version of WET. We implemented a batch mode with a queue that allows for user-submitted jobs. In addition to StirMark 3.1 as an attack module, we added attack modules based on StirMark 4.0. As a new image fidelity measure, we evaluate conditional entropy for different watermarking algorithms and different attacks. We also show the results of fitting the Receiver Operating Characteristic (ROC) analysis data using Parzen window density estimation. The fitted curve follows the data closely while having only two parameters to estimate.
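A sketch of the Parzen-window approach, assuming Gaussian kernels and an illustrative bandwidth h: the detector-statistic densities with and without the watermark are estimated from samples, and sweeping a threshold traces a smooth ROC curve.

```python
import numpy as np
from scipy.stats import norm

def parzen_sf(samples, t, h=0.1):
    # P(statistic > t) under a Gaussian-kernel (Parzen) density estimate.
    return float(np.mean(norm.sf(t, loc=samples, scale=h)))

rng = np.random.default_rng(0)
h0 = rng.normal(0.0, 1.0, 500)                # statistics, no watermark
h1 = rng.normal(2.0, 1.0, 500)                # statistics, watermark present
roc = [(parzen_sf(h0, t), parzen_sf(h1, t))   # (false alarm, detection) pairs
       for t in np.linspace(-4.0, 6.0, 50)]
```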
Methodologies and tools for watermark evaluation and benchmarking facilitate the development of improved watermarking techniques. In this paper we introduce and discuss the integration of audio watermark evaluation methods into the well-known web service Watermark Evaluation Testbed (WET). WET is enhanced with audio test content and attacks: a special set of audio files with characterized content and a collection of single attacks, as well as attack profiles, helps users select suitable audio files and attacks with their attack parameters.
In this paper we discuss natural language watermarking, which uses the structure of the sentence constituents in natural language text in order to insert a watermark. This approach is different from techniques, collectively referred to as “text watermarking,” which embed information by modifying the appearance of text elements, such as lines, words, or characters. We provide a survey of the current state of the art in natural language watermarking and introduce terminology, techniques, and tools for text processing. We also examine the parallels and differences of the two watermarking domains and outline how techniques from the image watermarking domain may be applicable to the natural language watermarking domain.
Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we must rely not only on the beneficial effects of redundancy reduction, but also on the ability of the compression algorithm to concentrate important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to determine which types of bits are most effective for the selective encryption of a video sequence compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.
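A hedged sketch of the mechanism, not the paper's cipher: only the byte positions flagged as sensitive are XORed with a keystream (here an illustrative SHA-256 counter-mode construction), while the rest of the bitstream passes through untouched.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Illustrative counter-mode keystream built from SHA-256.
    out, ctr = b'', 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def selective_encrypt(stream: bytes, sensitive: list, key: bytes) -> bytes:
    # sensitive: indices of bytes judged critical for reconstruction.
    ks = keystream(key, len(sensitive))
    out = bytearray(stream)
    for k, i in zip(ks, sensitive):
        out[i] ^= k
    return bytes(out)
```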
Robust watermarks are evaluated in terms of image fidelity and robustness. We extend this framework and apply reliability testing to robust watermark evaluation. Reliability is the probability that a watermarking algorithm will correctly detect or decode a watermark for a specified fidelity requirement under a given set of attacks and images. In reliability testing, a system is evaluated in terms of quality, load, capacity, and performance. To measure quality, which corresponds to image fidelity, we compensate for attacks before measuring the fidelity of attacked watermarked images. We use the conditional mean of pixel values to compensate for valumetric attacks such as gamma correction and histogram equalization. To compensate for geometric attacks, we use error concealment and a perfect-motion-estimation assumption. We define capacity to be the maximum embedding strength parameter and the maximum data payload; load is then the actual embedding strength and data payload of a watermark. To measure performance, we use the bit error rate (BER), the receiver operating characteristic (ROC), and the area under the ROC curve (AUC) of a watermarking algorithm for different attacks and images. We evaluate robust watermarks for various qualities, loads, attacks, and images.
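A minimal sketch of the two performance measures, assuming an empirical ROC given as (false alarm, detection) points: BER is the fraction of payload bits decoded incorrectly, and AUC is obtained by trapezoidal integration of the ROC curve.

```python
import numpy as np

def ber(sent_bits, decoded_bits):
    # Fraction of payload bits that were decoded incorrectly.
    sent, dec = np.asarray(sent_bits), np.asarray(decoded_bits)
    return float(np.mean(sent != dec))

def auc(false_alarm, detection):
    # Trapezoidal integration of ROC points ordered by increasing false alarm.
    fa, pd = np.asarray(false_alarm), np.asarray(detection)
    return float(np.sum(0.5 * (pd[1:] + pd[:-1]) * np.diff(fa)))

print(ber([1, 0, 1, 1], [1, 1, 1, 0]))        # 0.5
print(auc([0.0, 0.2, 1.0], [0.0, 0.9, 1.0]))  # 0.85
```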
With the proliferation of cameras in handheld devices that allow users to capture still images and video, providing users with software tools to efficiently manage multimedia content has become essential. In many cases users desire to organize their personal media content using high-level semantic labels. In this paper we will describe low-complexity algorithms that can be used to derive semantic labels, such as “indoor/outdoor,” “face/not face,” and “motion/not motion,” for mobile video sequences. We will also describe a method for summarizing mobile video sequences. We demonstrate the classification performance of the methods and their computational complexity using a typical processor used in many mobile terminals.
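Purely as a hedged illustration of what "low-complexity" can mean here (the paper's actual features and threshold are not given in this abstract): a "motion/not motion" label can be derived by thresholding the mean absolute difference between consecutive frames.

```python
import numpy as np

def motion_label(prev_frame, cur_frame, threshold=8.0):
    # Mean absolute difference between consecutive frames; threshold is assumed.
    mad = np.mean(np.abs(cur_frame.astype(float) - prev_frame.astype(float)))
    return 'motion' if mad > threshold else 'not motion'
```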
Multi-hop wireless networks rely on node cooperation to provide multicast services. The multi-hop communication offers increased coverage for such services, but also makes them more vulnerable to insider (or Byzantine) attacks coming from compromised nodes that behave arbitrarily to disrupt the network. In this work we identify vulnerabilities of on-demand multicast routing protocols for multi-hop wireless networks and discuss the challenges encountered in designing mechanisms to defend against them. We propose BSMR, a novel secure multicast routing protocol designed to withstand insider attacks from colluding adversaries. Our protocol is a software-based solution and does not require additional or specialized hardware. We present simulation results which demonstrate that BSMR effectively mitigates the identified attacks.
Peer-to-peer streaming systems are becoming highly popular for IP Television (IPTV). Most systems can be categorized as either tree-based or mesh-based, and as either push-based or pull-based. However, there is a lack of clear understanding of how these different mechanisms perform comparatively in a real-world setting. In this paper, we compare two representative streaming systems using mesh-based and multiple-tree-based overlay routing through deployments on the PlanetLab wide-area experimentation platform. To the best of our knowledge, this is the first study to directly compare streaming overlay architectures in real Internet settings. Our results indicate that mesh-based systems inject a much higher number of duplicate packets into the network, but they perform better under a variety of conditions. In particular, mesh-based systems give consistently higher application goodput as the number of overlay nodes or the streaming rate increases. They also perform better under churn and large flash crowds. Their performance suffers when latencies among peers are high, however. Overall, mesh-based systems appear to be a better choice than multi-tree-based systems for peer-to-peer streaming at a large scale.