The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Classical geometrical approach to circle fitting—review and new developments

C Rusu, M Tico, P Kuosmanen, E Delp
Download: PDF
Added 2008-04-07

Normal Mammogram Detection Based on Local Probability Difference Transforms and Support Vector Machines

W Chiracharit, Y Sun, P Kumhom, K Chamnongthai, CF Babbs, EJ Delp
Added 2008-04-07

The Emergence of Clusters in the Global Telecommunications Network

Sorin Adam Matei, Seungyoon Lee, Peter Monge, François Bar

Studies of international telecommunication networks in past years have found increases in density, centralization, and integration. More recent studies, however, have identified trends of decentralization and regionalization. The present research examines these structural changes in international telephone traffic among 110 countries between 1989 and 1999. It examines the competing theoretical models of core-periphery and cluster structures. The initial results show lowered centralization and inequality in the network of international telecommunications traffic. Statistical p* procedures demonstrate significant interactions within countries in blocks of similar economic development status, geographic region, and telecommunications infrastructure development status. Specifically, countries with less developed economic and telecommunications status showed significant increases in tendencies to connect to each other and to reciprocate ties. Altogether, the results support the idea that the global telecommunications network is moving toward a more diversified structure with the emergence of cohesive and interconnected subgroups. The findings have implications for global digital divide and developmental gap issues.

Added 2008-04-07

Digital watermarking: algorithms and applications

CI Podilchuk, EJ Delp
Download: PDF

Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here, along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application, such as copyright protection, authentication, tamper detection, and data hiding, as well as differences in technology and system requirements for different media types such as digital images, video, audio, and text.
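
As a rough illustration of the embed/detect framework the abstract refers to, the sketch below adds a key-seeded pseudorandom ±1 pattern to an image and detects it by correlation. The function names, embedding strength, and detection threshold are illustrative assumptions, not any of the algorithms surveyed in the paper.

```python
import numpy as np

def embed_watermark(image, key, strength=5.0):
    """Additively embed a key-seeded pseudorandom +/-1 pattern (toy example)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return image + strength * pattern

def detect_watermark(image, key, threshold=2.5):
    """Correlate the (possibly attacked) image with the pattern for `key`."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score, score > threshold

# Usage: mark a synthetic 64x64 image, then test with the right and a wrong key.
original = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
marked = embed_watermark(original, key=42)
print(detect_watermark(marked, key=42))  # large positive score: watermark detected
print(detect_watermark(marked, key=7))   # score near zero: no watermark for this key
```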

Added 2008-04-07

Fully automatic face recognition system using a combined audio-visual approach

A Albiol, L Torres, EJ Delp
Download: PDF

This paper presents a novel audio and video information fusion approach that greatly improves automatic recognition of people in video sequences. To that end, audio and video information is first used independently to obtain confidence values that indicate the likelihood that a specific person appears in a video shot. Finally, a post-classifier is applied to fuse audio and visual confidence values. The system has been tested on several news sequences and the results indicate that a significant improvement in the recognition rate can be achieved when both modalities are used together.
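
The fusion step described above can be illustrated with a minimal post-classifier that combines per-shot audio and video confidence values; the logistic form, weights, and bias below are hypothetical choices for illustration, not the classifier trained in the paper.

```python
import numpy as np

def fuse_confidences(audio_conf, video_conf, w_audio=0.4, w_video=0.6, bias=-0.5):
    """Toy post-classifier: logistic function of a weighted sum of modality confidences."""
    z = w_audio * audio_conf + w_video * video_conf + bias
    return 1.0 / (1.0 + np.exp(-z))

# One row per video shot: (audio confidence, video confidence) for a given person.
shots = np.array([
    [0.9, 0.8],   # both modalities agree the person is present
    [0.2, 0.1],   # both modalities agree the person is absent
    [0.9, 0.2],   # modalities disagree, so the fused score lands in between
])
for audio_c, video_c in shots:
    fused = fuse_confidences(audio_c, video_c)
    print(f"audio={audio_c:.1f} video={video_c:.1f} -> fused={fused:.2f}")
```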

Added 2008-04-07

Decomposing parameters of mixture Gaussian model using genetic and maximum likelihood algorithms on dental images

N Majdi-Nasab, M Analoui, EJ Delp

We present new approaches based on Genetic Algorithms (GAs), Simulated Annealing (SA), and Expectation Maximization (EM) for determining the parameters of a mixture Gaussian model. GAs are adaptive search techniques designed to search for near-optimal solutions of large-scale optimization problems with multiple local maxima. It has been shown that GAs are independent of initialization parameters and can efficiently optimize functions in large search spaces, whereas the solution obtained by EM depends on the initial parameters, with a relatively high likelihood of reaching a sub-optimal solution by becoming trapped in a local maximum. In this work, we propose a combination of a Genetic Algorithm with EM (Interlaced GA–EM) to improve the estimation of Gaussian mixture parameters. The method uses a population of mixture models, rather than a single mixture, iteratively in both GA and EM to determine the Gaussian mixture parameters. To assess the performance of the proposed methods, a series of Gaussian phantoms, based on the ‘Modified Shepp–Logan’ method, were created. All proposed methods were employed to estimate the tissue parameters in each phantom and were applied to micro computed tomography (μCT) dental images. The proposed method offers an accurate and stable solution for parameter estimation of Gaussian mixture models, with a higher likelihood of reaching the global optimum. Obtaining such accurate parameter estimates is a key requirement for image segmentation approaches that rely on a priori knowledge of tissue model parameters.
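
The idea of interlacing a population of candidate mixtures with EM updates can be sketched in one dimension as below. The population size, mutation scale, and iteration counts are arbitrary illustrative choices, and the GA part is reduced to selection plus mutation; this is not the paper's full Interlaced GA–EM procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_step(x, means, stds, weights):
    """One EM update for a 1-D Gaussian mixture."""
    # E-step: responsibility of each component for each sample.
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate means, standard deviations, and mixing weights.
    nk = resp.sum(axis=0)
    means = (resp * x[:, None]).sum(axis=0) / nk
    stds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-6
    return means, stds, nk / len(x)

def log_likelihood(x, means, stds, weights):
    dens = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return float(np.log(dens.sum(axis=1)).sum())

# Synthetic 1-D data drawn from a known two-component mixture.
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(3.0, 1.0, 600)])

# Population of candidate parameter sets (means, stds, weights).
population = [(rng.uniform(x.min(), x.max(), 2), np.ones(2), np.full(2, 0.5)) for _ in range(6)]

for generation in range(20):
    # Interlace: run one EM step on every candidate ...
    population = [em_step(x, *p) for p in population]
    # ... then select the fitter half and refill it with mutated copies.
    population.sort(key=lambda p: log_likelihood(x, *p), reverse=True)
    survivors = population[:3]
    mutants = [(m + rng.normal(0.0, 0.5, 2), s.copy(), w.copy()) for m, s, w in survivors]
    population = survivors + mutants

best_means, best_stds, best_weights = population[0]
print("estimated means:", np.sort(best_means))  # expected to land near [-2, 3]
```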

Added 2008-04-07

Wyner–Ziv Video Coding With Universal Prediction

Z Li, L Liu, EJ Delp
Download: PDF

The coding efficiency of a Wyner-Ziv video codec relies significantly on the quality of the side information extracted at the decoder. Constructing efficient side information is difficult, due in part to the fact that the original video sequence is not available at the decoder. Conventional motion search methods are widely used in Wyner-Ziv video decoders to extract the side information, which substantially increases decoding complexity. In this paper, we propose a new method to construct the side information based on the idea of universal prediction. This method, referred to as Wyner-Ziv video coding with universal prediction (WZUP), performs no motion search and assumes no underlying model of the original input video sequences at the decoder. Instead, WZUP estimates the side information from its observations of the past reconstructed video data. We show that WZUP can significantly reduce decoding complexity at the decoder and achieve fair side estimation performance, thus making it possible to design both the video encoder and the decoder with low computational complexity.
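
A toy illustration of building side information from past decoder output alone, with no motion search and no source model, is sketched below; the blending weights and window length are hypothetical and only stand in for, rather than reproduce, the universal prediction used by WZUP.

```python
import numpy as np

def side_information(past_frames, weights=(0.7, 0.3)):
    """Toy side-information estimate: a weighted blend of the most recently
    reconstructed frames; no motion search and no model of the source is used."""
    recent = past_frames[-len(weights):][::-1]                 # newest first
    w = np.asarray(weights, dtype=float) / sum(weights)
    return sum(wi * f for wi, f in zip(w, recent))

# Usage with synthetic "reconstructed" frames standing in for past decoder output.
rng = np.random.default_rng(1)
frames = [rng.normal(128.0, 20.0, (8, 8)) for _ in range(3)]
estimate = side_information(frames)
print("MSE of the estimate vs. the newest reconstructed frame:",
      float(np.mean((estimate - frames[-1]) ** 2)))
```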

Added 2008-04-07

Error concealment in MPEG video streams over ATM networks

P Salama, NB Shroff, EJ Delp
Download: PDF

When transmitting compressed video over a data network, one has to deal with how channel errors affect the decoding process. This is particularly a problem with data loss or erasures. In this paper we describe techniques to address this problem in the context of asynchronous transfer mode (ATM) networks; our techniques can be extended to other types of data networks, such as wireless networks. In ATM networks, channel errors or congestion cause data to be dropped, which results in the loss of entire macroblocks when MPEG video is transmitted. In order to reconstruct the missing data, the locations of these macroblocks must be known. We describe a technique for packing ATM cells with compressed data whereby the locations of missing macroblocks in the encoded video stream can be found. This technique also permits the proper decoding of correctly received macroblocks, and thus prevents the loss of ATM cells from affecting the decoding process. The packing strategy can also be used for wireless or other types of data networks. We also describe spatial and temporal techniques for the recovery of lost macroblocks. In particular, we develop several optimal estimation techniques, based on a Markov random field model, for reconstructing missing macroblocks from both spatial and temporal information. We further describe a sub-optimal estimation technique that can be implemented in real time.
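
As a much simpler stand-in for the MRF-based estimators developed in the paper, the sketch below conceals a lost 16x16 macroblock purely spatially, by distance-weighted interpolation from the pixels just outside its four borders; the weighting scheme is an illustrative assumption.

```python
import numpy as np

def conceal_macroblock(frame, top, left, size=16):
    """Fill a lost size x size macroblock by distance-weighted interpolation of
    the pixels just outside its four borders (spatial concealment only)."""
    out = frame.copy()
    above = frame[top - 1, left:left + size]      # row just above the block
    below = frame[top + size, left:left + size]   # row just below
    west = frame[top:top + size, left - 1]        # column just to the left
    east = frame[top:top + size, left + size]     # column just to the right
    for i in range(size):
        for j in range(size):
            w_above, w_below = size - i, i + 1
            w_west, w_east = size - j, j + 1
            out[top + i, left + j] = (
                w_above * above[j] + w_below * below[j]
                + w_west * west[i] + w_east * east[i]
            ) / (w_above + w_below + w_west + w_east)
    return out

# Usage: damage one macroblock of a smooth synthetic frame, then conceal it.
y, x = np.mgrid[0:64, 0:64]
frame = 2.0 * x + 1.0 * y                      # smooth gradient "image"
damaged = frame.copy()
damaged[24:40, 24:40] = 0.0                    # simulated lost 16x16 macroblock
restored = conceal_macroblock(damaged, top=24, left=24)
print("MSE before concealment:", float(np.mean((damaged - frame) ** 2)))
print("MSE after concealment: ", float(np.mean((restored - frame) ** 2)))
```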

Added 2008-04-07

Advances in digital video content protection

E Lin, A Eskicioglu, R Lagendijk, E Delp
Added 2008-04-07

Block artifact reduction using a transform-domain Markov random field model

Z Li, EJ Delp

The block-based discrete cosine transform (BDCT) is often used in image and video coding. At low data rates it may introduce block artifacts, which manifest themselves as annoying discontinuities between adjacent blocks. In this paper, we address this problem by investigating a transform-domain Markov random field (TD-MRF) model. Based on this model, two block artifact reduction postprocessing methods are presented. The first method, referred to as TD-MRF, provides an efficient progressive transform-domain solution. Our experimental results show that TD-MRF can reduce up to 90% of the computational complexity compared with spatial-domain MRF (SD-MRF) methods while still achieving comparable visual quality improvements. We then discuss a hybrid framework, referred to as TSD-MRF, that exploits the advantages of both TD-MRF and SD-MRF. The experimental results confirm that TSD-MRF improves visual quality both objectively and subjectively over SD-MRF methods.
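
To make the notion of block-artifact reduction concrete, the sketch below applies a generic spatial-domain smoothing pass across 8x8 block boundaries; it is neither the TD-MRF nor the SD-MRF method discussed in the paper, and the block size and smoothing strength are illustrative assumptions.

```python
import numpy as np

def smooth_block_boundaries(img, block=8, strength=0.25):
    """Naive deblocking: soften the pixel-value jump across every vertical and
    horizontal block boundary by moving the two boundary pixels toward each other."""
    out = img.astype(float).copy()
    for b in range(block, out.shape[1], block):   # vertical boundaries
        jump = out[:, b] - out[:, b - 1]
        out[:, b] -= strength * jump
        out[:, b - 1] += strength * jump
    for b in range(block, out.shape[0], block):   # horizontal boundaries
        jump = out[b, :] - out[b - 1, :]
        out[b, :] -= strength * jump
        out[b - 1, :] += strength * jump
    return out

# Usage: constant 8x8 blocks of different levels show visible steps at the edges.
rng = np.random.default_rng(2)
img = np.kron(rng.integers(100, 140, (4, 4)).astype(float), np.ones((8, 8)))
smoothed = smooth_block_boundaries(img)
print("mean jump across one boundary before:", float(np.abs(img[:, 8] - img[:, 7]).mean()))
print("mean jump across one boundary after: ", float(np.abs(smoothed[:, 8] - smoothed[:, 7]).mean()))
```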

Added 2008-04-07

An enhancement of leaky prediction layered video coding

Y Liu, P Salama, Z Li, EJ Delp
Download: PDF

In this paper, we focus on leaky prediction layered video coding (LPLC). LPLC includes a scaled version of the enhancement layer within the motion compensation (MC) loop to improve coding efficiency while maintaining graceful recovery in the presence of error drift. However, there is a deficiency inherent in the LPLC structure: the video quality reconstructed from both the enhancement layer and the base layer cannot be guaranteed to always be superior to that obtained using the base layer alone, even when no drift occurs. In this paper, we 1) highlight this deficiency using a formulation that describes LPLC; 2) propose a general framework that applies to both LPLC and a multiple description coding scheme using MC, and use this framework to further confirm the existence of the deficiency in LPLC; and 3) propose an enhanced LPLC, based on maximum-likelihood estimation, that addresses this deficiency. We then show how our new method performs compared to LPLC.
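
The leaky prediction idea itself can be sketched on a 1-D signal: the enhancement-layer reference leaks in only an alpha-scaled part of the previous enhancement reconstruction, so drift is bounded if the enhancement layer is lost. The quantizer steps, the alpha value, and the absence of motion compensation below are simplifying assumptions, and the sketch does not include the paper's enhanced LPLC.

```python
import numpy as np

def quantize(x, step):
    return step * np.round(x / step)

def leaky_prediction_encode(frames, alpha=0.7, base_step=8.0, enh_step=2.0):
    """1-D sketch of leaky prediction layered coding.  Both layers predict from
    the previous frame's reconstruction; the enhancement layer leaks in only an
    alpha-scaled part of the previous enhancement reconstruction."""
    base_ref = np.zeros_like(frames[0])
    enh_ref = np.zeros_like(frames[0])
    base_recon, enh_recon = [], []
    for frame in frames:
        # Base layer: coarse residual against the previous base reconstruction.
        base_res = quantize(frame - base_ref, base_step)
        new_base = base_ref + base_res
        # Enhancement layer: finer residual against the leaky reference.
        leaky_ref = base_ref + alpha * (enh_ref - base_ref)
        enh_res = quantize(frame - leaky_ref, enh_step)
        new_enh = leaky_ref + enh_res
        base_ref, enh_ref = new_base, new_enh
        base_recon.append(new_base)
        enh_recon.append(new_enh)
    return base_recon, enh_recon

# Usage: ten 16-sample "frames" of a slowly varying signal.
t = np.arange(10)[:, None] + np.arange(16)[None, :]
frames = list(100.0 + 10.0 * np.sin(t / 5.0))
base, enh = leaky_prediction_encode(frames)
err_base = np.mean([(b - f) ** 2 for b, f in zip(base, frames)])
err_enh = np.mean([(e - f) ** 2 for e, f in zip(enh, frames)])
print(f"base-only MSE {err_base:.2f}, base+enhancement MSE {err_enh:.2f}")
```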

Added 2008-04-07

Ontology in information security: a useful theoretical foundation and methodological tool

V Raskin, CF Hempelmann, KE Triezenberg, S Nirenburg

The paper introduces and advocates an ontological semantic approach to information security. Both the approach and its resources, the ontology and lexicons, are borrowed from the field of natural language processing and adjusted to the needs of the new domain. The approach pursues the ultimate dual goals of inclusion of natural language data sources as an integral part of the overall data sources in information security applications, and formal specification of the information security community know-how for the support of routine and time-efficient measures to prevent and counteract computer attacks. As the first order of the day, the approach is seen by the information security community as a powerful means to organize and unify the terminology and nomenclature of the field.

Added 2008-04-07

The user non-acceptance paradigm: INFOSEC's dirty little secret

SJ Greenwald, KG Olthoff, V Raskin, W Ruch

This panel will address users’ perceptions and misperceptions of the risk/benefit and benefit/nuisance ratios associated with information security products, and will grope for a solution, based on the psychology of personality trait-factoring results, among other multidisciplinary approaches, to the problem of user non-acceptance of information security products. The problem acquires a much more scientific guise when amalgamated with the psychology of personality and reinforced by reflections from the field on patterns of user behavior. A gross simplification of the main thrust of the panel is this thesis: if we start profiling the defenders rather than the offenders, and do it on the basis of real science rather than very crude personality tests, then we will, at the very least, understand what is happening and possibly create a desirable profile for sysadmins, CIOs, and perhaps even CFOs. This swept-under-the-rug problem is information security’s “dirty little secret.” No other forum is designed to address this, and it may well become yet another major conceptual and paradigmatic shift in the field, of the type initiated in the NSPWs over the last decade. We expect the panel to generate considerable interest among the participants.

Added 2008-04-07

Ontological semantics, formal ontology, and ambiguity

Sergei Nirenburg, Victor Raskin

Ontological semantics is a theory of meaning in natural language and an approach to natural language processing (NLP) which uses an ontology as the central resource for extracting and representing meaning of natural language texts, reasoning about knowledge derived from texts as well as generating natural language texts based on representations of their meaning. Ontological semantics directly supports such applications as machine translation of natural languages, information extraction, text summarization, question answering, advice giving, collaborative work of networks of human and software agents, etc. Ontological semantics pays serious attention to its theoretical foundations by explicating its premises; therefore, formal ontology and its relations with ontological semantics are important. Besides a general brief discussion of these relations, the paper focuses on the important theoretical and practical issue of the distinction between ontology and natural language. It is argued that this crucial distinction lies not in the (inaccurately) presumed nonambiguity of the one and the well-established ambiguity of the other but rather in the constructed and overtly defined nature of ontological concepts and labels on which no human background knowledge can operate unintentionally to introduce ambiguity, as opposed to pervasive uncontrolled and uncontrollable ambiguity in natural language. The emphasis on this distinction, we argue, will provide better theoretical support for the central tenets of formal ontology by freeing it from the Wittgensteinian and Rortyan retreats from the analytical paradigm; it also reinforces the methodology of NLP by maintaining a productive demarcation between the language-independent nature of ontology and language-specific nature of the lexicons, a demarcation that has paid off well in consecutive implementations of ontological semantics and their applications in practical computer systems.

Added 2008-04-07