2012
Parthasarathy, Saranya: Bloom Filter Based Intrusion Detection for Smart Grid. MS thesis, Texas A&M University, 2012.

@masterthesis{ParMSThesis12,
  title     = {Bloom Filter Based Intrusion Detection for Smart Grid},
  author    = {Saranya Parthasarathy},
  year      = {2012},
  date      = {2012-05-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {This thesis addresses the problem of local intrusion detection for SCADA (Supervisory Control and Data Acquisition) field devices in the smart grid. A methodology is proposed to detect anomalies in communication patterns using a combination of n-gram analysis and a Bloom filter. The predictable and regular nature of SCADA communication patterns is exploited to train the intrusion detection system. The proposed approach is tested on MODBUS, a protocol used for communication between a SCADA server and field devices in power systems, against attacks such as HMI compromise and man-in-the-middle. A Bloom filter is chosen because of its strong space advantage over other set-representation data structures such as hash tables and linked lists; the advantage comes from its probabilistic nature and compact array structure. The false positive rates are found to be minimal with a careful choice of Bloom filter design parameters, and the memory efficiency of the Bloom filter makes it suitable for implementation in resource-constrained SCADA components. It is also established that knowledge of the physical state of the power system, i.e., normal, emergency, or restorative, can help improve the accuracy of the proposed approach.},
  keywords  = {MS Thesis, smart grid, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
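The n-gram/Bloom-filter idea in this abstract can be illustrated with a minimal sketch. All names and parameters below are hypothetical (the thesis does not publish code), and the six-byte "MODBUS" message is an invented placeholder: byte n-grams from known-normal traffic are inserted into a Bloom filter at training time, and a message whose n-grams are absent from the filter is flagged.

```python
import hashlib

class BloomFilter:
    """Compact bit array; membership tests may rarely give false positives,
    never false negatives."""
    def __init__(self, size=8192, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size)

    def _indexes(self, item: bytes):
        # Derive num_hashes indexes by salting SHA-256 with a counter byte.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item: bytes):
        return all(self.bits[idx] for idx in self._indexes(item))

def ngrams(msg: bytes, n=3):
    return [msg[i:i + n] for i in range(len(msg) - n + 1)]

# Training: insert n-grams from known-normal traffic.
bf = BloomFilter()
normal_msg = bytes([0x01, 0x03, 0x00, 0x6B, 0x00, 0x03])  # hypothetical request
for g in ngrams(normal_msg):
    bf.add(g)

# Detection: fraction of a message's n-grams missing from the filter.
def anomaly_score(msg: bytes) -> float:
    grams = ngrams(msg)
    misses = sum(1 for g in grams if g not in bf)
    return misses / len(grams)
```

A normal message scores 0.0, while an unseen payload produces n-grams that miss the filter; the filter's false-positive rate, governed by the size/num_hashes choice, is what the abstract refers to as the careful parameter choice.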
Kollegala, Revathi S: The Robust Classification of Hyperspectral Images Using Adaptive Wavelet Kernel Support Vector Data Description (AWK-SVDD). MS thesis, Texas A&M University, 2012.

@masterthesis{KolMSThesis12,
  title     = {The Robust Classification of Hyperspectral Images Using Adaptive Wavelet Kernel Support Vector Data Description (AWK-SVDD)},
  author    = {Revathi S Kollegala},
  year      = {2012},
  date      = {2012-05-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {Detection of targets in hyperspectral images is a specific case of one-class classification. It is particularly relevant to remote sensing and has received considerable interest in the past few years. This thesis proposes the use of wavelet functions as kernels with Support Vector Data Description for target detection in hyperspectral images. Specifically, it proposes the Adaptive Wavelet Kernel Support Vector Data Description (AWK-SVDD), which learns the optimal wavelet function to be used given the target signature. The performance and computational requirements of AWK-SVDD are compared with those of existing methods and other wavelet functions. An introduction to target detection, both in general and in the context of hyperspectral images, is given, along with an overview of the thesis and its contributions. A brief mathematical background on one-class classification with reference to target detection is included, existing methods are described, and essential concepts relevant to the proposed approach are introduced. The use of wavelet functions as kernels with Support Vector Data Description, the conditions for their use, and the use of two functions to form the kernel are examined and analyzed. The proposed approach, AWK-SVDD, is described mathematically. Implementation details and results on the Urban hyperspectral dataset with a random target signature are given. The results confirm the better performance of AWK-SVDD compared to conventional kernels, wavelet kernels, and the two-function Morlet-Radial Basis Function kernel. Convergence problems encountered during the Support Vector Data Description optimization are discussed, and the thesis concludes with suggestions for future work.},
  keywords  = {fusion, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
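To make the wavelet-kernel idea concrete, here is a heavily simplified sketch. The Morlet wavelet kernel below is a standard form from the wavelet-kernel literature; SVDD proper solves a quadratic program for the support-vector weights, which this sketch replaces with an unweighted kernel centroid, and the three-band "spectra" are invented toy data, not the Urban dataset.

```python
import math

def morlet_kernel(x, z, a=1.0):
    """Morlet wavelet kernel: product over dimensions of
    cos(1.75*d) * exp(-d^2/2) with d = (x_i - z_i)/a."""
    k = 1.0
    for xi, zi in zip(x, z):
        d = (xi - zi) / a
        k *= math.cos(1.75 * d) * math.exp(-0.5 * d * d)
    return k

def centroid_distance_sq(z, train, a=1.0):
    """Squared feature-space distance from z to the kernel centroid of the
    training set; a point far from the target class scores high."""
    n = len(train)
    self_term = morlet_kernel(z, z, a)  # equals 1 for this kernel
    cross = sum(morlet_kernel(z, x, a) for x in train) / n
    const = sum(morlet_kernel(x, y, a) for x in train for y in train) / (n * n)
    return self_term - 2.0 * cross + const

# Toy 3-band target signatures clustered near (0.1, 0.2, 0.1).
train = [(0.1, 0.2, 0.1), (0.12, 0.18, 0.11), (0.09, 0.21, 0.1)]
inlier = (0.11, 0.2, 0.1)
outlier = (5.0, 5.0, 5.0)
```

A test pixel resembling the target signature lands close to the centroid, while a dissimilar pixel lands far away; the adaptive part of AWK-SVDD would additionally select the wavelet (and scale `a`) best matched to the target signature.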
2008
Shankar, Sonu: Parameter Assignment for Improved Connectivity and Security in Randomly Deployed Wireless Sensor Networks via Hybrid Omni/Uni-Directional Antennas. MS thesis, Texas A&M University, 2008.

@masterthesis{ShaMSThesis08,
  title     = {Parameter Assignment for Improved Connectivity and Security in Randomly Deployed Wireless Sensor Networks via Hybrid Omni/Uni-Directional Antennas},
  author    = {Sonu Shankar},
  year      = {2008},
  date      = {2008-08-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {Configuring a network system to operate at optimal levels of performance requires a comprehensive understanding of the effects of a variety of system parameters on crucial metrics like connectivity and resilience to network attacks. Traditionally, omni-directional antennas have been used for communication in wireless sensor networks. In this thesis, a hybrid communication model is presented wherein nodes in a network are capable of both omni-directional and uni-directional communication. The effect of such a model on performance in randomly deployed wireless sensor networks is studied, specifically examining the effect of a variety of network parameters on network performance. The work in this thesis demonstrates that, when the hybrid communication model is employed, the probability of 100% connectivity improves by almost 90% and that of k-connectivity improves by almost 80%, even at low node densities, when compared to the traditional omni-directional model. In terms of network security, the hybrid approach was found to improve network resilience to the collision attack by almost 85%, and the cost of launching a successful network partition attack was increased by as much as 600%. The gains in connectivity and resilience were found to improve with increasing node densities and decreasing antenna beamwidths.},
  keywords  = {mmsn, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
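The connectivity probabilities the abstract reports are typically estimated by Monte-Carlo simulation over random deployments. The sketch below does this for the baseline omni-directional disk model only (the hybrid omni/uni-directional model of the thesis would add sectored links); node count, radius, and trial count are illustrative choices, not the thesis's parameters.

```python
import random

def is_connected(points, r):
    """Union-find over nodes; an edge exists when Euclidean distance <= r
    (omni-directional disk model)."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

def p_connected(n_nodes, radius, trials=200, seed=1):
    """Monte-Carlo estimate of the probability that a uniform random
    deployment in the unit square is fully connected."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n_nodes)]
        hits += is_connected(pts, radius)
    return hits / trials
```

Sweeping `radius` (or node density) with this estimator reproduces the qualitative trend in the abstract: connectivity probability rises monotonically with communication range and node count.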
2006
Chen, Anli: Encrypted Media Aggregation in Wireless Sensor Networks. MS thesis, Texas A&M University, 2006.

@masterthesis{CheMERep08,
  title     = {Encrypted Media Aggregation in Wireless Sensor Networks},
  author    = {Anli Chen},
  year      = {2006},
  date      = {2006-05-09},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {Historically, sensor devices have typically measured simple quantities such as humidity, temperature, or pressure, resulting in a fairly limited amount of data even over thousands of sensors. Looking ten years into the future, when video capture devices will most likely be small and inexpensive, video-based sensor networks will become possible. Previous literature has demonstrated the necessity of in-network data aggregation to minimize the volume of messages exchanged in hierarchical wireless sensor networks. However, sensor networks are more vulnerable than traditional communication and computation systems to security threats because of their severe power constraints: a portion of the sensor devices may be physically captured by attackers, and since crucial elements such as cluster heads and aggregators often hold information of a higher security level, they are more attractive to attackers and may invite more malicious intrusion. Our primary objective is to develop a sufficiently secure, efficient, adaptive, and resilient mechanism for media aggregation within wireless sensor networks. After substantial investigation, we assert that one effective way to balance security with resource limitations for secure aggregation is to employ aggregation functions with the homomorphic property. This essentially means that aggregation can occur directly on ciphertext (i.e., encrypted media) as opposed to plaintext (i.e., raw unencrypted data). We propose to adopt Statistical Disclosure Control (SDC), Secure Multiparty Computation (SMC), and Discrete Wavelet Transform (DWT) techniques to support secure aggregation, tailoring the three methods to the wireless sensor network scenario so that encrypted media can be securely aggregated. The result of this study allows the network administrator to adaptively select the most appropriate securing method that assures adequate protection according to practical needs and environmental conditions.},
  keywords  = {mmsn, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
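The homomorphic property the abstract relies on can be shown with a minimal additively homomorphic sketch (in the style of stream-cipher-based schemes for sensor aggregation, not the thesis's own SDC/SMC/DWT constructions): each node masks its reading with a keyed pad, an aggregator sums ciphertexts without decrypting, and only the sink, which knows all node keys, recovers the sum.

```python
import hashlib
import hmac

M = 2 ** 32  # modulus; must exceed the largest possible aggregate

def keystream(key: bytes, nonce: int) -> int:
    """Per-node, per-epoch pad derived from a shared key (PRF via HMAC-SHA256)."""
    mac = hmac.new(key, nonce.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def encrypt(reading: int, key: bytes, nonce: int) -> int:
    return (reading + keystream(key, nonce)) % M

def aggregate(ciphertexts):
    """Additive homomorphism: the sum of ciphertexts encrypts the sum of readings."""
    return sum(ciphertexts) % M

def decrypt_aggregate(ct_sum: int, keys, nonce: int) -> int:
    pad = sum(keystream(k, nonce) for k in keys) % M
    return (ct_sum - pad) % M

# Three hypothetical sensor nodes reporting in epoch 0.
readings = [7, 11, 3]
keys = [b"node-key-1", b"node-key-2", b"node-key-3"]
cts = [encrypt(m, k, 0) for m, k in zip(readings, keys)]
total = decrypt_aggregate(aggregate(cts), keys, 0)
```

The intermediate aggregator never sees a plaintext reading, which is exactly the property that protects compromised cluster heads in the scenario the abstract describes.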
2005
Budhia, Udit: Steganalysis of Video Sequences using Collusion Sensitivity. MS thesis, Texas A&M University, 2005.

@masterthesis{BudMSThesis05,
  title     = {Steganalysis of Video Sequences using Collusion Sensitivity},
  author    = {Udit Budhia},
  year      = {2005},
  date      = {2005-05-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {In this thesis we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability, the presence of covert data in multimedia. Existing steganalysis algorithms target covert information in still images; when applied directly to video sequences these approaches are suboptimal. We present methods that overcome this limitation by using the redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. In particular, we target the spread spectrum steganography method because of its widespread use. Our gains are achieved by exploiting the collusion attack, recently studied in the field of digital video watermarking, together with more sophisticated pattern recognition tools. Through analysis and simulations we evaluate the effectiveness of a video steganalysis method based on an averaging collusion scheme. Other forms of collusion, namely weighted linear collusion and block-based collusion schemes, are proposed to improve the detection performance. The proposed steganalysis methods were successful in detecting hidden watermarks bearing low SNR with high accuracy, and the simulation results show the improved performance of the proposed temporal methods over spatial methods. We conclude that the essence of future video steganalysis techniques lies in the exploitation of temporal redundancy.},
  keywords  = {forensics, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
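The averaging-collusion idea can be sketched simply: temporally adjacent frames of a static scene are averaged to estimate the clean frame, and the residual's energy is inflated when an independent per-frame Gaussian watermark is present. The frame data, noise levels, and watermark strength below are invented for illustration; the thesis's detector and pattern-recognition stage are more sophisticated.

```python
import random

def residual_energy(frames):
    """Estimate each interior frame from the average of its temporal
    neighbours, then measure the mean squared residual; an embedded
    per-frame watermark inflates this energy."""
    energy, count = 0.0, 0
    for t in range(1, len(frames) - 1):
        for i in range(len(frames[t])):
            est = 0.5 * (frames[t - 1][i] + frames[t + 1][i])
            energy += (frames[t][i] - est) ** 2
            count += 1
    return energy / count

rng = random.Random(7)
base = [rng.uniform(0, 255) for _ in range(256)]  # one static "scene" row
# Clean sequence: scene plus small sensor noise per frame.
clean = [[p + rng.gauss(0, 0.5) for p in base] for _ in range(10)]
# Stego sequence: an i.i.d. Gaussian watermark added independently to each frame.
marked = [[p + rng.gauss(0, 3.0) for p in frame] for frame in clean]
```

Thresholding the residual energy then decides "watermark present" versus "clean"; temporal redundancy is what makes the residual of a clean sequence small in the first place.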
2004
Luh, William: Collusion-Resistant Fingerprinting for Multimedia in a Broadcast Channel Environment. MS thesis, Texas A&M University, 2004.

@masterthesis{LuhMSThesis04,
  title     = {Collusion-Resistant Fingerprinting for Multimedia in a Broadcast Channel Environment},
  author    = {William Luh},
  year      = {2004},
  date      = {2004-12-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {Digital fingerprinting is a method by which a copyright owner can uniquely embed a buyer-dependent, inconspicuous serial number (representing the fingerprint) into every copy of digital data that is legally sold. The buyer of a legal copy is then deterred from distributing further copies, because the unique fingerprint can be used to trace back the origin of the piracy. The major challenge in fingerprinting is collusion, an attack in which a coalition of pirates compares several of their uniquely fingerprinted copies for the purpose of detecting and removing the fingerprints. The contributions of this thesis are two-fold. First, it motivates the need for robustness against large coalitions of pirates by introducing the concept of a malicious distributor, which has been overlooked in prior work, and develops a novel fingerprinting code with superior codeword length in comparison to existing work under this scenario. In addition, ideas presented in the proposed fingerprinting design can easily be applied to existing fingerprinting schemes, making them more robust to collusion attacks. Second, a new framework termed Joint Source Fingerprinting, which integrates the processes of watermarking and codebook design, is introduced. The need for this new paradigm is motivated by the fact that existing fingerprinting methods leave the multimedia perceptually undistorted after collusion is applied. In contrast, the new paradigm equates collusion amongst a coalition of pirates with degrading the perceptual characteristics, and hence the commercial value, of the multimedia in question. By enforcing that collusion diminishes the commercial value of the content, the pirates are deterred from attacking the fingerprints. A fingerprinting algorithm for video, as well as an efficient means of broadcasting or distributing fingerprinted video, is also presented. Simulation results are provided to verify our theoretical and empirical observations.},
  keywords  = {drm, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
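The collusion attack and its classical countermeasure can be sketched with Gaussian spread-spectrum fingerprints (the standard baseline this line of work builds on, not the thesis's own code construction): averaging attenuates each colluder's fingerprint to 1/|coalition| strength, yet a correlation detector still ranks the colluders highest. All sizes and the coalition below are arbitrary toy choices.

```python
import random

rng = random.Random(3)
N, USERS = 2048, 10
COLLUDERS = {1, 4, 7}  # hypothetical pirate coalition

# Each user's copy = host signal + that user's Gaussian fingerprint.
fingerprints = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(USERS)]
host = [rng.uniform(0, 255) for _ in range(N)]

# Averaging collusion: pirates average their copies, so each of their
# fingerprints survives at 1/|COLLUDERS| strength.
pirated = [host[i] + sum(fingerprints[u][i] for u in COLLUDERS) / len(COLLUDERS)
           for i in range(N)]

def correlation(u):
    """Correlate user u's fingerprint with the residual of the pirated copy
    (non-blind detection: the owner knows the host)."""
    return sum(fingerprints[u][i] * (pirated[i] - host[i]) for i in range(N))

ranked = sorted(range(USERS), key=correlation, reverse=True)
accused = set(ranked[:len(COLLUDERS)])
```

A colluder's expected correlation is roughly N/|coalition|, while an innocent user's is zero-mean noise, which is why detection degrades, but does not fail, as coalitions grow; the thesis's point is that very large coalitions (e.g., aided by a malicious distributor) break this margin.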
Dube, Raghav: Denial of Service Attacks: Path Reconstruction for IP Traceback using Adjusted Probabilistic Packet Marking. MS thesis, Texas A&M University, 2004.

@masterthesis{DubMSThesis04,
  title     = {Denial of Service Attacks: Path Reconstruction for IP Traceback using Adjusted Probabilistic Packet Marking},
  author    = {Raghav Dube},
  year      = {2004},
  date      = {2004-12-01},
  address   = {College Station, TX},
  school    = {Texas A\&M University},
  abstract  = {The use of the Internet has revolutionized the way information is exchanged, changed business paradigms, and put mission-critical and sensitive systems online. Any disruption of this connectivity and the plethora of services provided results in significant damages to everyone involved. Denial of Service (DoS) attacks are becoming increasingly common and are a cause of lost time and revenue. Flooding-type DoS attacks use spoofed IP addresses to disguise the attackers, making identification of the attackers extremely difficult. This work proposes a new scheme that allows the victim of a DoS attack to identify the correct origin of the malicious traffic. The suggested mechanism requires routers to mark packets using adjusted probabilistic marking, which lowers the number of packet markings required to identify the traffic source. Unlike many related works, we use the existing IPv4 header structure to incorporate these markings. We simulate and test our algorithms using real Internet trace data to show that our technique is fast and works successfully for a large number of distributed attackers.},
  keywords  = {forensics, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
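Classical probabilistic packet marking, the baseline that "adjusted" marking improves on, can be sketched as follows. Each router overwrites a single mark field with probability p, so a mark from a router d hops from the victim survives with probability p(1-p)^(d-1), and the victim can order routers by mark counts. The path, p, and packet count below are illustrative, not the thesis's parameters.

```python
import random

def send_with_marking(path, n_packets, p, rng):
    """Each router on the path (attacker-side first, victim-side last)
    overwrites the single mark field with its own address with probability p;
    the victim tallies the surviving marks."""
    counts = {r: 0 for r in path}
    for _ in range(n_packets):
        mark = None
        for router in path:
            if rng.random() < p:
                mark = router  # later routers overwrite earlier marks
        if mark is not None:
            counts[mark] += 1
    return counts

def reconstruct(counts):
    """A router d hops from the victim survives with probability
    p*(1-p)**(d-1), so sorting by descending count recovers the
    victim-to-attacker order of the path."""
    return sorted(counts, key=counts.get, reverse=True)

rng = random.Random(5)
path = ["R_attacker", "R2", "R3", "R_victim"]
counts = send_with_marking(path, 20000, 0.2, rng)
order = reconstruct(counts)
```

The geometric fall-off in survival probability is exactly why many packets are needed to see distant routers; adjusting the marking probability per hop, as in the thesis, flattens this distribution and reduces the number of packets required.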
Mathai, Nebu John: 0.18µm CMOS Implementation of a Video Watermarking Algorithm. MS thesis, University of Toronto, 2004.

@masterthesis{MatMSThesis04,
  title     = {0.18µm CMOS Implementation of a Video Watermarking Algorithm},
  author    = {Nebu John Mathai},
  year      = {2004},
  date      = {2004-12-01},
  address   = {Toronto, Canada},
  school    = {University of Toronto},
  abstract  = {We consider hardware implementation aspects of the digital watermarking problem through the implementation of a well-known video watermarking algorithm called Just Another Watermarking System (JAWS), and we discuss the time and area constraints that must be satisfied by a successful hardware implementation. A hardware architecture that implements the algorithm under these constraints is then proposed and analyzed to gain an understanding of the relationships between algorithmic features and implementation cost. Some general findings of this work that can be applied toward making algorithmic developments more amenable to hardware implementation are finally presented.},
  keywords  = {drm, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
2003
Zhao, Yang: Dual Domain Semi-Fragile Watermarking for Image Authentication. MS thesis, University of Toronto, 2003.

@masterthesis{ZhaMSThesis03,
  title     = {Dual Domain Semi-Fragile Watermarking for Image Authentication},
  author    = {Yang Zhao},
  year      = {2003},
  date      = {2003-12-01},
  address   = {Toronto, Canada},
  school    = {University of Toronto},
  abstract  = {Techniques to establish the authenticity and integrity of digital images are becoming increasingly essential for secure transacting. Ideally, the authentication algorithm should distinguish incidental, integrity-maintaining distortions such as lossy compression from malicious manipulations; this has motivated research into semi-fragile watermarking. A novel watermarking algorithm is proposed in this thesis that is both robust to compression and self-authenticating. The proposed algorithm is a content-based, semi-fragile watermarking method that employs a public-key scheme for still image authentication and integrity verification. The use of dual domains in the proposed algorithm enables greater control over the robustness and fragility of the overall scheme to manipulations, and provides very good classification of intentional and incidental tampering. In addition, the thesis provides theoretical analysis of the performance and feasibility of the scheme. We also present experimental results to verify the theoretical observations, along with comparisons of the proposed algorithm to four popular techniques.},
  keywords  = {drm, forensics, MS Thesis, thesis},
  pubstate  = {published},
  tppubtype = {masterthesis}
}
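The semi-fragile distinction between incidental and malicious distortion can be illustrated with a toy content-based signature (not the thesis's dual-domain, public-key construction): coarsely quantized block means survive small, compression-like perturbations but change under a content edit. The image, block size, and quantization step are all invented for the example.

```python
import hashlib

STEP = 16  # quantization step; coarse enough to absorb compression-like noise

def block_means(img, bs=8):
    h, w = len(img), len(img[0])
    means = []
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            s = sum(img[y][x] for y in range(by, by + bs)
                               for x in range(bx, bx + bs))
            means.append(s / (bs * bs))
    return means

def signature(img):
    """Hash the coarsely quantized block means: incidental distortion keeps
    each mean inside its quantization bin, a malicious edit moves it out."""
    bins = bytes(int(m // STEP) for m in block_means(img))
    return hashlib.sha256(bins).hexdigest()

# 16x16 toy image whose block means sit at bin centres, far from bin edges.
img = [[40 if x < 8 else 72 for x in range(16)] for y in range(16)]
sig = signature(img)

# "Compression": small zero-mean perturbation leaves every block mean intact.
compressed = [[p + (2 if (x + y) % 2 else -2) for x, p in enumerate(row)]
              for y, row in enumerate(img)]

# "Tampering": replace the top-left block's content outright.
tampered = [row[:] for row in img]
for y in range(8):
    for x in range(8):
        tampered[y][x] = 200
```

In a real scheme the signature would be signed with a private key and embedded as the watermark; the dual-domain design in the thesis is precisely about tuning where the robust/fragile boundary between these two cases falls.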
Squeira, Adrian Enhanced Watermark Detection 2003. Abstract | BibTeX | Tags: drm, MS Thesis, thesis @masterthesis{SeqMSThesis03, title = {Enhanced Watermark Detection}, author = {Adrian Squeira}, year = {2003}, date = {2003-12-01}, address = {Toronto, Canada}, school = {University of Toronto}, abstract = {Digital watermarking is a relatively overhead free solution to the problem of copyright infringement. In this thesis we investigate the choice of transform domain for embedding blind and non-blind watermarks in the face of eight different attacks. The chosen attacks are commonly used in watermark benchmarking programs. After extensive simulations involving seventeen different transforms, we find that our findings corroborate the results obtained by Ramkumar et al. for compression attacks. In addition, we analyse the Voloshynovskiy scheme for its probability of false alarm in a novel way. We then use the transform domain chosen from above and introduce the use of the SAGE algorithm as a parameter estimator. This algorithm is used to lower the probability of false alarm for the Voloshynovskiy scheme by improving the accuracy of parameter estimation and therefore lowering the variance of the detector output.}, keywords = {drm, MS Thesis, thesis}, pubstate = {published}, tppubtype = {masterthesis} } Digital watermarking is a relatively overhead free solution to the problem of copyright infringement. In this thesis we investigate the choice of transform domain for embedding blind and non-blind watermarks in the face of eight different attacks. The chosen attacks are commonly used in watermark benchmarking programs. After extensive simulations involving seventeen different transforms, we find that our findings corroborate the results obtained by Ramkumar et al. for compression attacks. In addition, we analyse the Voloshynovskiy scheme for its probability of false alarm in a novel way. 
We then use the transform domain chosen from above and introduce the use of the SAGE algorithm as a parameter estimator. This algorithm is used to lower the probability of false alarm for the Voloshynovskiy scheme by improving the accuracy of parameter estimation and therefore lowering the variance of the detector output. |
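The blind detection studied in this entry reduces to correlating received coefficients with a keyed pseudo-random pattern and comparing against a threshold; the threshold sets the trade-off between missed detections and the probability of false alarm that the thesis analyses. A minimal sketch of that detector (the function names and the embedding strength `alpha` are illustrative, not taken from the thesis):

```python
import random

def keyed_pattern(n, key):
    """Pseudo-random +/-1 watermark pattern derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(coeffs, pattern, alpha=0.5):
    """Additive spread-spectrum embedding: c' = c + alpha * w."""
    return [c + alpha * w for c, w in zip(coeffs, pattern)]

def detect(coeffs, pattern, threshold=0.25):
    """Blind detector: normalized correlation against the keyed pattern.
    For unmarked unit-variance content the correlation is roughly
    N(0, 1/sqrt(n)), so the threshold controls the false-alarm rate."""
    corr = sum(c * w for c, w in zip(coeffs, pattern)) / len(coeffs)
    return corr > threshold
```

With n = 4096 unit-variance host coefficients, the correlation of unmarked content has standard deviation about 1/64, so a threshold of 0.25 gives a negligible false-alarm probability while a watermark of strength 0.5 is detected reliably.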
2002 |
Ahsan, Kamran Covert Channel Analysis and Data Hiding in TCP/IP 2002. Abstract | BibTeX | Tags: forensics, MS Thesis, thesis @masterthesis{AhsMAScThesis02, title = {Covert Channel Analysis and Data Hiding in TCP/IP}, author = {Kamran Ahsan}, year = {2002}, date = {2002-08-01}, abstract = {This thesis investigates the existence of covert channels in computer networks by analyzing the transport and the Internet layers of the TCP/IP protocol suite. Two approaches for data hiding are identified: packet header manipulation and packet sorting. Each scenario facilitates the interaction of steganographic principles with the existing network security environment. Specifically, we show how associating additional information with IPv4 headers can support security mechanisms in network nodes such as routers and firewalls, and services such as authentication, audit, and billing. Furthermore, use of packet sorting within the IPSec framework results in an enhanced network security architecture. The packet sorting approach is simulated at the network layer, which demonstrates the feasibility of packet sorting under varying network conditions. While bridging the areas of data hiding, network protocols and network security, both techniques have potential for practical data hiding at the transport and network layers.}, keywords = {forensics, MS Thesis, thesis}, pubstate = {published}, tppubtype = {masterthesis} } This thesis investigates the existence of covert channels in computer networks by analyzing the transport and the Internet layers of the TCP/IP protocol suite. Two approaches for data hiding are identified: packet header manipulation and packet sorting. Each scenario facilitates the interaction of steganographic principles with the existing network security environment. Specifically, we show how associating additional information with IPv4 headers can support security mechanisms in network nodes such as routers and firewalls, and services such as authentication, audit, and billing.
Furthermore, use of packet sorting within the IPSec framework results in an enhanced network security architecture. The packet sorting approach is simulated at the network layer, which demonstrates the feasibility of packet sorting under varying network conditions. While bridging the areas of data hiding, network protocols and network security, both techniques have potential for practical data hiding at the transport and network layers. |
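The packet-sorting channel described in this entry encodes information in the transmission order of otherwise ordinary packets: a permutation of n sequenced packets can carry up to log2(n!) bits while leaving every header field untouched. A minimal sketch of the order-encoding step using a Lehmer-code mapping (the function names are illustrative; the thesis's coupling to IPSec sequence numbers is not modeled here):

```python
from math import factorial

def int_to_order(value, packets):
    """Encode `value` (0 <= value < n!) as a transmission order."""
    pool = list(packets)
    order = []
    for i in range(len(pool), 0, -1):
        # Lehmer code: each digit selects one of the remaining packets.
        idx, value = divmod(value, factorial(i - 1))
        order.append(pool.pop(idx))
    return order

def order_to_int(order, packets):
    """Receiver recovers the hidden integer from the observed order,
    using the packets' natural (sequence-number) order as reference."""
    pool = list(packets)
    value = 0
    for p in order:
        idx = pool.index(p)
        value += idx * factorial(len(pool) - 1)
        pool.pop(idx)
    return value
```

For example, reordering a window of 16 packets hides log2(16!) ≈ 44 bits per window, which is why the channel's capacity under reordering-prone network conditions matters.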
2001 |
Su, Karen Digital Video Watermarking Principles for Resistance to Collusion and Interpolation Attacks 2001. Abstract | BibTeX | Tags: drm, MS Thesis, thesis @masterthesis{SuMAScThesis01, title = {Digital Video Watermarking Principles for Resistance to Collusion and Interpolation Attacks}, author = {Karen Su}, year = {2001}, date = {2001-09-01}, address = {Toronto, Canada}, school = {University of Toronto}, abstract = {In this thesis, we propose two video watermarks based on the novel ideas of statistical invisibility and content-synchronized placement. We present a mathematical framework designed to facilitate collusion analysis and thereby enable protection from this important class of attacks. We define statistical invisibility and show that it is not only a property that supports the desired resistance to such attacks, but that it can also be easily induced using a spatially localized image-dependent approach. To construct the watermark, the notion of a watermark’s footprint, the spatial locations over which its energy is spread, is introduced. By defining localized footprints with regular structures, e.g., sets of subframes within each frame, current image watermarks can immediately be applied at the subframe level. Results are presented to demonstrate the effectiveness of the algorithms. Comparisons are made with the well-known JAWS and CDMA video watermarks, as well as the StirMark 3.1 benchmarking suite.}, keywords = {drm, MS Thesis, thesis}, pubstate = {published}, tppubtype = {masterthesis} } In this thesis, we propose two video watermarks based on the novel ideas of statistical invisibility and content-synchronized placement. We present a mathematical framework designed to facilitate collusion analysis and thereby enable protection from this important class of attacks.
We define statistical invisibility and show that it is not only a property that supports the desired resistance to such attacks, but that it can also be easily induced using a spatially localized image-dependent approach. To construct the watermark, the notion of a watermark’s footprint, the spatial locations over which its energy is spread, is introduced. By defining localized footprints with regular structures, e.g., sets of subframes within each frame, current image watermarks can immediately be applied at the subframe level. Results are presented to demonstrate the effectiveness of the algorithms. Comparisons are made with the well-known JAWS and CDMA video watermarks, as well as the StirMark 3.1 benchmarking suite. |
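The collusion attacks this thesis defends against can be illustrated with the simplest case, averaging: when K users average their individually fingerprinted copies, each user's pattern survives at only 1/K of its original strength, which is why statistically invisible, content-synchronized placement matters. A toy numerical sketch (names and parameters are illustrative, not from the thesis):

```python
import random

def user_mark(n, user_key):
    """Per-user pseudo-random +/-1 fingerprint."""
    rng = random.Random(user_key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def collude_average(copies):
    """Colluders average their copies coefficient by coefficient."""
    return [sum(vals) / len(vals) for vals in zip(*copies)]

def correlation(signal, mark):
    """Normalized correlation used by the tracing detector."""
    return sum(s * w for s, w in zip(signal, mark)) / len(signal)
```

With K = 4 colluders, the correlation against any one user's mark drops from 1 to roughly 0.25, pushing it toward the detector's noise floor.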
Fei, Chuhong The Choice of Transform for Robust Watermarking in the Presence of Lossy Compression 2001. Abstract | BibTeX | Tags: drm, MS Thesis, thesis @masterthesis{FeiMAScThesis01, title = {The Choice of Transform for Robust Watermarking in the Presence of Lossy Compression}, author = {Chuhong Fei}, year = {2001}, date = {2001-04-01}, address = {Toronto, Canada}, school = {University of Toronto}, abstract = {Digital watermarking technology is an approach for the protection of digital information against illegal duplication and manipulation. In this thesis, we concentrate on the problem of robust watermarking in the presence of lossy compression. We investigate how the embedding of the watermark signal in a suitable transform domain can improve performance. Two typical classes of watermarking techniques are considered: one is the spread spectrum watermarking method, the other is the quantization-based watermarking method. Based on a communication paradigm for watermarking, we present an information-theoretic approach to estimate the number of watermark bits that can be reliably hidden. The best domain for watermarking is determined to maximize the watermark channel capacity. Based on the advantages and disadvantages of both watermarking methods, a novel hybrid watermarking technique is proposed which combines the best of both spread spectrum and quantization-based methods.}, keywords = {drm, MS Thesis, thesis}, pubstate = {published}, tppubtype = {masterthesis} } Digital watermarking technology is an approach for the protection of digital information against illegal duplication and manipulation. In this thesis, we concentrate on the problem of robust watermarking in the presence of lossy compression. We investigate how the embedding of the watermark signal in a suitable transform domain can improve performance.
Two typical classes of watermarking techniques are considered: one is the spread spectrum watermarking method, the other is the quantization-based watermarking method. Based on a communication paradigm for watermarking, we present an information-theoretic approach to estimate the number of watermark bits that can be reliably hidden. The best domain for watermarking is determined to maximize the watermark channel capacity. Based on the advantages and disadvantages of both watermarking methods, a novel hybrid watermarking technique is proposed which combines the best of both spread spectrum and quantization-based methods. |
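The quantization-based family contrasted in this entry is commonly realized as quantization index modulation (QIM): a coefficient is rounded onto one of two interleaved lattices, and the receiver decides the bit from whichever lattice point lies closer. A minimal scalar sketch (the step size `delta` and function names are illustrative; the thesis's hybrid scheme is not reproduced here):

```python
def qim_embed(x, bit, delta=8.0):
    """Round x onto the lattice for `bit`: multiples of delta carry 0,
    multiples offset by delta/2 carry 1."""
    offset = bit * delta / 2
    return round((x - offset) / delta) * delta + offset

def qim_detect(y, delta=8.0):
    """Decode by nearest lattice point: correct for any added
    distortion smaller than delta/4."""
    d0 = abs(y - round(y / delta) * delta)
    d1 = abs(y - (round((y - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1
```

A larger `delta` buys robustness to compression noise at the cost of embedding distortion, the same robustness/capacity trade-off the thesis addresses by choosing the transform domain.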