DSpace Community: http://inet.vidyasagar.ac.in:8080/jspui/handle/123456789/397

Title: Prioritization of Multi-Sensor Tracked Data
URI: http://inet.vidyasagar.ac.in:8080/jspui/handle/123456789/6108
Date: 2021-07-26
Authors: Kaity, Sourav
Abstract: In real scenarios, accurate tracking of moving objects is essential for surveillance, performance analysis of airborne vehicles, detection of inbound threats, engagement of anti-threat equipment, and identification of the launch point of an enemy threat. Tracking radar systems, electro-optical tracking systems and passive target tracking systems are well-known moving-object tracking systems, and all of them are widely used throughout the globe. The accuracy of a measured object location and the reliability of that measurement are two critical factors. To achieve a more reliable result, multiple sensors are normally used instead of one: if all measurements agree with one another, reliability increases. Sometimes, however, one or more sensors capture erroneous measurements. If such sensors are identified and eliminated, the reliability of the measurement can be significantly improved. Each sensor has its own measurement accuracy level. To increase measurement accuracy, it is necessary to identify the error-contributing factors of each sensor and their impacts. An efficient data fusion algorithm can then be applied to obtain an accurate measurement. The time efficiency of the algorithm is also a prime concern, as all the system measurements are used in real-time applications.
We have focused our research on three different kinds of tracking system: the Electro-Optical Tracking System (EOTS), the Tracking Radar System and the Passive Target Tracking System. The working principles of these sensors differ. We have worked with all of them and tried to find the best possible accuracy model for each. These models have significantly improved the accuracy and, at the same time, helped in calculating the error boundary. Another important contribution is a real-time remote visualization system for monitoring the real-time updates of the moving object's location.
In this research work, multiple electro-optical sensors (EOS) work together to produce the object location. Each pair of EOS measurements can be combined to compute an object position, and more than two sensors can produce a more reliable result. However, if any one of the sensors is erroneous, the whole system becomes unreliable. Three models, namely "Prioritization and Elimination of Erroneous Sensors Using the Perpendicular Distance Method", "Improvement in the Accuracy of the Moving Object Position by Eliminating Erroneous Sensors Using a Clustering Approach" and "Multi-Sensor Data Fusion Technique for Target Tracking Based on the Combination of the Triangulation Method and the K-means Algorithm", are established for identifying one or more erroneous sensors. All these models are shown to successfully eliminate the erroneous sensor(s) and produce an accurate object location.
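The perpendicular-distance idea can be sketched as follows. This is a minimal illustration, not the thesis's exact algorithm: each EOS is reduced to a position and a unit line-of-sight direction, pairwise triangulations give candidate fixes, a per-axis median of those fixes serves as a consensus position, and any sensor whose line of sight misses the consensus by more than a threshold is flagged as erroneous. The function names and the median-based consensus are illustrative assumptions.

```python
import numpy as np

def triangulate(p1, b1, p2, b2):
    """Intersect two bearing lines (sensor position p, unit direction b).

    Solves p1 + t1*b1 = p2 + t2*b2 for t1, t2 in the least-squares sense,
    which also handles near-parallel lines gracefully."""
    A = np.column_stack((b1, -b2))
    t = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return p1 + t[0] * b1

def perpendicular_distance(point, p, b):
    """Distance from a point to the line through p with unit direction b."""
    d = point - p
    return np.linalg.norm(d - np.dot(d, b) * b)

def flag_erroneous(sensors, threshold):
    """sensors: list of (position, unit bearing) pairs.

    Returns a consensus fix and the indices of sensors whose line of
    sight misses that fix by more than the threshold."""
    fixes = [triangulate(*sensors[i], *sensors[j])
             for i in range(len(sensors)) for j in range(i + 1, len(sensors))]
    consensus = np.median(np.array(fixes), axis=0)  # robust to a bad pair
    bad = [k for k, (p, b) in enumerate(sensors)
           if perpendicular_distance(consensus, p, b) > threshold]
    return consensus, bad
```

With three consistent sensors the consensus equals the true target and no sensor is flagged; adding a fourth sensor with a wrong bearing causes only that sensor to exceed the perpendicular-distance threshold.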
In the multiple-radar scenario, all radars measure the object location with their respective accuracies. Our research work focused on the factors affecting tracking radar measurement accuracy, and their impacts, to develop the model "Analysis of Factors and Their Impacts on Measurement Accuracy and Prioritisation of Radars", which quantifies the measurement accuracy. To improve the accuracy, we established the model "Multiple Radar Data Fusion to Improve the Accuracy in Position Measurement Based on a Clustering Algorithm". It first identifies the presence of any erroneous measurement; after elimination, an efficient data fusion technique is applied to produce an accurate position measurement.
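A minimal sketch of this eliminate-then-fuse pattern, under assumptions not stated in the abstract (known per-radar accuracy, a median-based outlier gate, and inverse-variance fusion), might look like:

```python
import numpy as np

def fuse_radar_fixes(fixes, sigmas, gate=3.0):
    """fixes: (N, 2) radar position measurements; sigmas: per-radar
    accuracy (standard deviation, same units as the fixes).

    Measurements farther than gate*sigma from the coordinate-wise median
    are treated as erroneous and dropped; the survivors are fused with
    inverse-variance weights so more accurate radars count more."""
    fixes = np.asarray(fixes, float)
    sigmas = np.asarray(sigmas, float)
    center = np.median(fixes, axis=0)
    keep = np.linalg.norm(fixes - center, axis=1) <= gate * sigmas
    w = 1.0 / sigmas[keep] ** 2
    fused = (fixes[keep] * w[:, None]).sum(axis=0) / w.sum()
    return fused, np.flatnonzero(~keep)
```

The gate step plays the role of erroneous-measurement identification, and the weighted average plays the role of the fusion step described above.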
A passive target tracking system is a combination of at least four time-synchronised receivers. Here we have established a time-difference-of-arrival (TDOA) algorithm that finds the object location from the differences in arrival times of the electromagnetic signals emitted by the target. The accuracy of the position measurement depends on the geographical locations and the time-synchronisation accuracy of the receivers. A model, "Prioritization of Receivers for Minimum Possible Error Boundary in the Time Difference of Arrival Algorithm", is established to find the best possible combination of four receivers when more than four are available. This model also finds the error boundary of the measurement and the correlation between the error factor and the range of the moving object.
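The core TDOA computation can be illustrated with a standard Gauss-Newton iteration; the thesis's own solver is not specified here, so this is a generic sketch: each time difference relative to receiver 0 is converted to a range difference, and the position that best explains all range differences is found iteratively.

```python
import numpy as np

C = 299_792_458.0  # propagation speed of the EM signal, m/s

def tdoa_locate(receivers, dt, x0, iters=20):
    """Gauss-Newton solver for a 2-D TDOA fix.

    receivers: (N, 2) receiver positions; dt: (N-1,) arrival-time
    differences relative to receiver 0; x0: initial position guess."""
    rx = np.asarray(receivers, float)
    x = np.asarray(x0, float)
    meas = C * np.asarray(dt, float)           # measured range differences
    for _ in range(iters):
        d = np.linalg.norm(rx - x, axis=1)     # distance to each receiver
        res = (d[1:] - d[0]) - meas            # range-difference residuals
        # Jacobian of (d_i - d_0) with respect to the position x
        J = (x - rx[1:]) / d[1:, None] - (x - rx[0]) / d[0]
        x = x - np.linalg.lstsq(J, res, rcond=None)[0]
    return x
```

With four receivers there are three independent time differences, which is exactly enough to solve for a 2-D position; extra receivers overdetermine the system, which is what makes receiver prioritization meaningful.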
In this research work, different techniques are adopted and improved for finding erroneous sensors based on the unique error-contributing factors of all three kinds of sensor. The Electro-Optical Tracking System, Tracking Radar System and Passive Target Tracking System are prioritized on multiple critical criteria so that the best sensors can be used for data fusion and the most accurate result achieved. A time-efficient clustering algorithm is defined for the tracking principle of each kind of tracking system, and in each case the clustering algorithm is efficiently implemented to eliminate erroneous sensors and to group the best sensors for improved measurement. A number of experiments were carried out for all three kinds of tracking sensor to establish the algorithms, and the results obtained in all the experiments were satisfactory. Real-time remote visualization of the measured parameters is also an important task for monitoring, and it is analysed and discussed in detail in the thesis. Overall, the performance of all these techniques and systems was tested rigorously through simulation to produce reliable, accurate results in real time.

Title: Design of an effective Congestion Control Routing Protocol for Mobile-Ad-Hoc Network
URI: http://inet.vidyasagar.ac.in:8080/jspui/handle/123456789/6106
Date: 2021-07-26
Authors: Singha, Soamdeep
Abstract: A Mobile Ad hoc Network (MANET) is an infrastructure-less, self-configuring network in which the nodes themselves create and manage the network in a self-organized manner. Mobile ad hoc networks play an important role in the deployment of future wireless communication systems. In today's world, MANETs find use in disaster management, military applications and other emergency operations. MANETs have faced great performance requirements in recent years due to the increased use of streaming multimedia applications. To meet these requirements, the routing protocols should provide data transfer with minimal delay, packet loss and jitter in a bandwidth-restricted environment. A MANET inherently depends on the routing scheme employed to provide the expected Quality of Service (QoS). Many congestion-control routing protocols have been developed in the past to address these issues, such as Dynamic Source Routing (DSR), Ad-hoc On-Demand Distance Vector (AODV), the Zone Routing Protocol (ZRP) and the Temporally Ordered Routing Algorithm (TORA). However, the capability of these traditional protocols to support streaming multimedia applications is limited. In the present investigation, we propose different approaches to Random Early Detection (RED) through queue management in order to design an effective congestion-control routing protocol for MANETs. RED is a powerful mechanism for controlling traffic. It can provide better network utilization than Drop-Tail if properly used, but can induce network instability and major traffic disruption if not properly configured. RED configuration has been a problem since its first proposal, and many have tried to address this topic. Unfortunately, most of the studies propose RED configurations (optimal sets of RED parameters) based on heuristics and simulations rather than on a systematic approach. Their common problem is that each proposed configuration is only good for the particular traffic conditions studied, but may have detrimental effects if used in other conditions. In this study, we propose a general method for configuring RED congestion-control modules, based on a model of active queue management (AQM).
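For reference, the classic RED mechanism that all six models below build on can be sketched compactly: an exponentially weighted moving average of the queue length drives an early-drop probability that rises linearly between two thresholds. The parameter values here are illustrative defaults, not the dissertation's tuned configurations.

```python
import random

class REDQueue:
    """Textbook RED: an EWMA of the queue length drives a linearly
    increasing early-drop probability between min_th and max_th."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002, capacity=30):
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.wq, self.capacity = wq, capacity
        self.avg, self.queue = 0.0, []

    def drop_probability(self):
        """Early-drop probability as a function of the average queue size."""
        if self.avg < self.min_th:
            return 0.0
        if self.avg >= self.max_th:
            return 1.0
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)

    def enqueue(self, pkt):
        """Update the average, then admit or drop the arriving packet."""
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.queue)
        if len(self.queue) >= self.capacity or random.random() < self.drop_probability():
            return False  # early (probabilistic) or forced (overflow) drop
        self.queue.append(pkt)
        return True
```

The sensitivity problem the abstract describes lives in min_th, max_th, max_p and wq: a configuration that stabilizes one traffic mix can oscillate or under-utilize under another, which is what motivates the six adaptive models.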
In this dissertation, six new congestion-control models are introduced to improve the performance of RED: Model-1, Application of Dynamic Weight with Distance to Improve the Performance of RED (ADWD-RED-IP); Model-2, Active Queue Management in RED to Reduce Packet Loss (AQM-RED-RPL); Model-3, A Predictable Active Queue Management to Reduce the Sensitivity of RED Parameters (PAQM-RS-RED); Model-4, An Innovative Active Queue Management Model Through Threshold Adjustment Using Queue Size (IAQM-TA-QZ); Model-5, A Novel Congestion Control Algorithm Using Buffer Occupancy RED (CCA-BO-RED); and Model-6, Active Queue Management in RED Considering the Critical Point on the Target Queue (AQM-RED-CPTQ). In Model-1 (ADWD-RED-IP), a dynamic weight parameter Dq is introduced with a drop probability P to increase RED's efficiency. Next, Model-2 (AQM-RED-RPL) achieves a lower packet drop rate by making many refinements and by monitoring both the average queue size and the instantaneous queue size in the packet-dropping function. Model-3 (PAQM-RS-RED) can be incorporated directly in RED routers; it eliminates the sensitivity to variables that influence the functioning of RED and, across a broad range of traffic situations, reliably reaches a clearly defined target average queue length. Model-4 (IAQM-TA-QZ) provides an algorithm that adapts the threshold parameters and the packet-drop probability according to the traffic load. Model-5 (CCA-BO-RED) measures the buffer occupancy rate of the queue and treats it as a congestion parameter from which queue crowding is predicted; this is used to modify the RED variables dynamically. Finally, Model-6 (AQM-RED-CPTQ) introduces a Critical Point on the Target Queue together with some traits of RED and its variants, enhancing these criteria to provide greater congestion management over the network while preserving the value of RED.
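The load-driven adaptation in Model-4 is not specified in the abstract; as a stand-in illustration, the well-known Adaptive RED update rule (Floyd et al., 2001) shows the general shape of such a mechanism: the early-drop ceiling max_p is raised additively when the average queue sits above its target and decayed multiplicatively when below.

```python
def adapt_max_p(avg, target, max_p, alpha=0.01, beta=0.9, lo=0.01, hi=0.5):
    """One Adaptive-RED-style update step for the drop ceiling max_p.

    avg: current average queue size; target: desired average queue size.
    Additive increase when congested, multiplicative decrease when idle;
    lo/hi bounds keep max_p in a sane operating range."""
    if avg > target:
        max_p = min(hi, max_p + alpha)   # queue too long: drop more aggressively
    elif avg < target:
        max_p = max(lo, max_p * beta)    # queue short: back off gently
    return max_p
```

Run periodically (Adaptive RED uses a fixed interval), this keeps the average queue near the target across changing traffic loads, which is the same goal the abstract states for the threshold-adjustment models.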
This research analyzes the performance of the proposed congestion-control ad hoc routing protocols, namely Random Early Detection (RED) and its variants, using Network Simulator version 2 (NS-2). The simulation is carried out with 100 nodes. Two network traffic scenarios, one with 10 connections and the other with 20, are considered. The simulation areas are 400 x 400 and 600 x 1000 metres, and the fixed mobility speeds are 10 m/s and 20 m/s. The performance of the above routing protocols was analyzed under the Random Waypoint, Random Walk and Random Direction mobility models. The packet delivery ratio and the end-to-end delay for a varying number of sources were evaluated with respect to parameters such as node speed, network traffic and node density. The comparative study pointed out the relative strengths and weaknesses of these congestion-control ad hoc routing protocols.
In the present research, various methodologies have been introduced to improve the existing routing schemes for congestion control with the help of active queue management. We compared our proposed schemes with some popular existing schemes such as RED, ERED, SRED, REM, BLUE, LDC and FREED. It was observed that the end-to-end delay, packet delivery ratio and packet drop count of the proposed schemes are better than those of the existing schemes.

Title: Design and Analysis of Image Steganographic Protocol
URI: http://inet.vidyasagar.ac.in:8080/jspui/handle/123456789/5745
Date: 2021-02-08
Authors: Chowdhuri, Partha
Abstract: In today's Internet era, secure data communication is vital and indispensable. Image steganography is one of the most popular and widely used techniques to protect valuable information from illegitimate access. The quality of the stego image obtained from any steganographic scheme is inversely proportional to its data-hiding capacity. This poses a challenge for the prospective researcher: to strike a good trade-off among stego-image quality, embedding capacity and robustness. Moreover, not only the extraction of the secret message from the stego image but also the reconstruction of the original image from the stego image is of paramount importance for many human-centric applications such as tactical communication, health care, e-governance, commercial security and intellectual property rights. In the last two decades, researchers around the globe have tried to resolve these problems to some extent but have not achieved a significant level of success. To overcome these issues, some new image steganographic schemes have been designed in the spatial domain. These schemes maintain a good balance between stego-image quality, embedding capacity and robustness.
Two single-image-based steganographic schemes have been designed and implemented using graph neighbourhood and pixel-value difference. These schemes produce a good-quality stego image along with high embedding capacity. To increase the embedding capacity and robustness, and to achieve reversibility, some dual-image-based steganographic schemes have been designed using graph neighbourhood and a weighted matrix. In these schemes, the use of dual images and image-interpolation techniques helps to increase the data-hiding capacity, improve the visual quality and enhance security.
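For context, the simplest spatial-domain embedding that the schemes above refine is least-significant-bit (LSB) substitution, sketched below. This is a generic baseline, not the graph-neighbourhood or weighted-matrix schemes themselves: each secret bit replaces the LSB of one cover pixel, so each pixel changes by at most 1, which is what keeps the stego image visually close to the cover.

```python
def lsb_embed(pixels, bits):
    """Embed a bit string into the least-significant bits of pixel values.

    pixels: list of 8-bit intensities; bits: string of '0'/'1'."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | int(b)  # clear LSB, then set secret bit
    return stego

def lsb_extract(stego, n):
    """Recover the first n embedded bits from the stego pixels."""
    return ''.join(str(p & 1) for p in stego[:n])
```

Plain LSB is neither reversible (the cover's original LSBs are lost) nor robust to compression, which is precisely what motivates the dual-image, weighted-matrix and transform-domain schemes of this thesis.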
To strengthen the robustness in compressed environments, some novel steganographic schemes have been developed in the transform domain using the Discrete Cosine Transform and the Discrete Wavelet Transform. To limit the distortion of the transform-domain coefficients, a weighted matrix is introduced to maintain a good trade-off between quality and robustness. Further, some standard steganalysis techniques have been used to examine the proposed methods, and the schemes have been tested under steganographic attacks to analyze their robustness, because designing a new scheme is not enough: the analysis of its impact in terms of security and robustness is what determines whether it can be advocated globally.

Title: Techniques for DNA sequences compression and encryption
URI: http://inet.vidyasagar.ac.in:8080/jspui/handle/123456789/5583
Date: 2020-10-12
Authors: Hossein, Syed Mahamud
Abstract: The purpose of this research is to achieve lossless compression and encryption that completes within milliseconds. Notable research challenges concern the storage, transfer and safety of deoxyribonucleic acid (DNA) sequences. Although pattern matching for text compression has been studied for years and many publications are available in the literature, there is still room to enhance effectiveness in terms of both compression and encryption. Users always want to acquire more information in the least possible time and space, and nowadays the transmission of DNA/RNA/protein sequences, especially over wireless networks, is very common.
DNA database sizes are growing greatly, from millions to billions of records annually. Therefore, storing and searching DNA databases requires a systematic lossless compression and encryption algorithm for safe transmission. In bioinformatics, the storage and transmission of DNA are very important from the point of view of compression rate, compression ratio and encryption. DNA sequences need large storage space and long encryption times, causing much loss of time when transmitting information.
Short repeated patterns occur with high frequency in biological sequences. The proposed compression algorithm is based on combinations of REPEAT, REVERSE, GENETIC PALINDROME and PALINDROME patterns. Another proposed compression-and-selective-encryption algorithm modifies the Huffman and RSA algorithms. The algorithm searches for exact repetitions, substitutes substrings with corresponding ASCII codes and produces a library file, thereby accumulating the data statistics. In this method the data are secured by the ASCII code values used for information interchange and by the library file, which acts as a signature. Huffman's algorithm is applied to the output of the repetition stage, and the levels and node positions of the Huffman tree are also altered for encryption.
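The repeat-substitution step can be illustrated with a toy version, which is an assumption-laden sketch rather than the thesis's algorithm: the most frequent k-mers are collected into a library and each is replaced by a single code character outside the A/C/G/T alphabet, so the library file is required both to decompress and, as the abstract notes, acts as a signature.

```python
from collections import Counter

def build_library(seq, k=4, n_codes=8):
    """Assign a single-character code (outside A/C/G/T) to each of the
    n most frequent non-overlapping k-mers in the sequence."""
    counts = Counter(seq[i:i + k] for i in range(0, len(seq) - k + 1, k))
    codes = [chr(128 + i) for i in range(n_codes)]
    return {kmer: c for (kmer, _), c in zip(counts.most_common(n_codes), codes)}

def compress(seq, lib, k=4):
    """Replace each known k-mer chunk with its 1-character library code."""
    out = []
    i = 0
    while i < len(seq):
        chunk = seq[i:i + k]
        out.append(lib.get(chunk, chunk))  # unknown chunks pass through
        i += k
    return ''.join(out)

def decompress(data, lib):
    """Invert the substitution using the library file."""
    inv = {c: kmer for kmer, c in lib.items()}
    return ''.join(inv.get(ch, ch) for ch in data)
```

Because unknown chunks pass through literally and every code maps back through the library, the round trip is lossless, which is the property the abstract demands of the full scheme.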
This provides data safety: the exact coded values assigned at encoding time are essential for decoding. The proposed method safeguards the sequence by applying ASCII symbols and is user friendly. This security is provided in tier one; in tier two, selective encryption techniques are used for higher-grade safety.
From the information point of view, the most demanding question nowadays is the safety of information during transmission. The selective encryption process provides security, and the technique can be applied to the compressed data, to the library file, or to both. In selective encryption, a fraction of the message is encrypted while the remaining part is left unchanged; this partial coverage is the defining feature of a selective encryption system. The proposed selective encryption reduces the computational cost of protecting the data. Its safety is ensured by a signature that depends on the ASCII codes and the progressively built library file, which acts as a key. Consequently, a systematic lossless compression technique, together with data structures for efficient storage, secure communication and searching of very large data sets, is essential. These days, DNA/RNA sequences with a complex structure that store facts of different types at the same time are in common use. The running time is very small and depends on the input file size. The assessment of an encryption system depends on its speed and the level of safety it provides. The running time of this algorithm is minimal; it needs little memory and can be used easily. The mass demand is for minimal storage space and low computational cost, so a systematic algorithm is needed for compression and encryption.
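The selective-encryption principle can be sketched as follows. This illustration deliberately substitutes a simple SHA-256 counter-mode keystream for the thesis's modified Huffman/RSA machinery, and the 25% fraction is an arbitrary assumption: only the leading fraction of the payload (e.g. the library file, which acts as the key to the rest) is encrypted, so the bulk of the data needs no cryptographic work.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256 in counter mode as a toy keystream (illustrative only,
    not the thesis's RSA-based scheme)."""
    out = b''
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def selective_encrypt(data: bytes, key: bytes, fraction=0.25) -> bytes:
    """XOR only the leading fraction of the payload with the keystream;
    the tail is left in the clear. XOR is its own inverse, so applying
    the same function again with the same key decrypts."""
    n = int(len(data) * fraction)
    head = bytes(a ^ b for a, b in zip(data[:n], keystream(key, n)))
    return head + data[n:]
```

If the encrypted head is the library file of the compression scheme above, the clear tail is just a stream of code characters that is useless without it, which is the security argument the abstract makes for encrypting only a fraction.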
Compression minimizes the file size, and encryption ensures the safety of a file that is to be sent over an insecure network such as the Internet. In this age of information, the sharing and transfer of data have increased to a great extent. Generally, information exchange takes place over open channels, making it vulnerable to interception. On the other hand, effective information retrieval is needed to quickly discover relevant information from this huge mass of data.
For that purpose, six enhanced compression algorithms for shrinking large collections of DNA sequences, and two selective encryption schemes based on modified Huffman and RSA, are presented. When a user requests the sequence of an organism, an encrypted, compressed DNA sequence can be sent from the source to the user and then decrypted and decompressed at the client end, reducing transmission time over the Internet.
The experimental results show that our compression-encryption algorithm is competitive with the best algorithms and is almost the fastest overall when the number of patterns is not very large. Consequently, the algorithm is well suited to general string-matching applications. These data structures and algorithms can be used in several settings, and experiments show that they compare favourably with other techniques commonly used in those fields. This work, therefore, is highly economical and has market potential.
The algorithm was also tested on benchmark data and on artificial sequences of equivalent length. Applying the modified Huffman technique lowers the compression rate and ratio. The compression technique is also compared with published results, and the selective encryption is compared with RSA algorithms.