III MCA First Project Review Results
III MCA students are asked to submit the following documents on the website on or before 4.2.2011.
A8711001 Amala devi.T: Why was clustering used in the associativity-based algorithm? Why not another algorithm?
A8711002 Dhana Lakshmi.P: Hybrid search already exists; how does yours differ from the others?
A8711003 Madhubala.S: Why was Eclipse used as the front end? Why not another?
A8711004 Padmini.A: How can it be checked?
A8711005 Rajalakshmi.S: ------------------------------------
A8711006 Selva Anitha.M: Sharing of the website privacy policy with the statistics service; how can it be checked?
A8711007 Sengamala barani.S: How will you demonstrate the analysis and design of the system? Where will you get the cost?
A8711008 Sivagami.N: It cannot be provided as a six-month project.
A8711009 Yoga Nandhini.S: How will you simulate virtual cloud computing with an SOA emphasis?
A8711010 Anand babu: -------------------------------------------
A8711011 Arun Kumar.P.V: Why not link state?
A8711012 Guru Barakumar.M: ----------------------------------------------
A8711013 B.Karthick: Why was tabu search used?
A8711014 Karthikeyan.M: Why was a random-walk-based search algorithm used? Why not another algorithm?
A8711015 Mani Muthu.M: How will you simulate the system?
A8711016 Mariappan.L: How will you implement the project?
A8711017 Nagarethina Kumar.B: Why not another algorithm?
A8711018 Rakeshjeyavendhan.K: Why not another? Why this algorithm?
A8711019 Surendran.B: Why can't it be applied to other networks? Why is it specific to an optimal network?
SOAP:
SOAP provides a simple and lightweight mechanism for exchanging structured and typed information between peers in a decentralized, distributed environment using XML. SOAP does not itself define any application semantics, such as a programming model or implementation-specific semantics; rather, it defines a simple mechanism for expressing application semantics by providing a modular packaging model and mechanisms for encoding data within modules. This allows SOAP to be used in a large variety of systems, ranging from messaging systems to RPC.
SOAP consists of three parts:
• The SOAP envelope construct defines an overall framework for expressing what is in a message, who should deal with it, and whether it is optional or mandatory.
• The SOAP encoding rules define a serialization mechanism that can be used to exchange instances of application-defined datatypes.
• The SOAP RPC representation defines a convention that can be used to represent remote procedure calls and responses.
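As a concrete illustration, the sketch below builds a minimal SOAP message in Java with the SAAJ API (javax.xml.soap, bundled with Java SE up to version 8). The GetQuote element, its namespace, and the symbol value are invented for the example and do not correspond to any real service.

```java
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPMessage;

// Builds a minimal SOAP envelope with one application-defined body element.
public class SoapEnvelopeDemo {
    public static void main(String[] args) throws Exception {
        SOAPMessage message = MessageFactory.newInstance().createMessage();
        SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();
        SOAPBody body = envelope.getBody();

        // Application semantics live inside the body; element name and namespace are examples.
        SOAPElement request = body.addChildElement("GetQuote", "ex", "http://example.org/quotes");
        request.addChildElement("symbol").addTextNode("IBM");

        message.saveChanges();
        message.writeTo(System.out);   // prints the serialized Envelope/Body XML
    }
}
```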
SOA:
Service-oriented architecture (SOA) is a flexible set of design principles used during the phases of systems development and integration in computing. A system based on a SOA will package functionality as a suite of interoperable services that can be used within multiple separate systems from several business domains.
SOA also generally provides a way for consumers of services, such as web-based applications, to be aware of available SOA-based services. For example, several disparate departments within a company may develop and deploy SOA services in different implementation languages; their respective clients will benefit from a well understood, well defined interface to access them. XML is commonly used for interfacing with SOA services, though this is not required.
SOA defines how to integrate widely disparate applications for a Web-based environment and uses multiple implementation platforms. Rather than defining an API, SOA defines the interface in terms of protocols and functionality. An endpoint is the entry point for such a SOA implementation.
Service-orientation requires loose coupling of services with operating systems and other technologies that underlie applications. SOA separates functions into distinct units, or services,[1] which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services. One can consider SOA a continuum, as opposed to distributed computing or modular programming.
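A minimal sketch of how a single function can be packaged as a network-accessible service in Java, assuming the JAX-WS API (javax.jws / javax.xml.ws, bundled with Java SE up to version 8) is available. The QuoteService class, its operation, and the endpoint URL are invented for illustration.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A tiny service exposed over SOAP/HTTP; any client that consumes the generated
// WSDL can call it, regardless of the client's implementation language.
@WebService
public class QuoteService {
    @WebMethod
    public double getQuote(String symbol) {
        return "IBM".equals(symbol) ? 145.0 : 0.0;   // placeholder data for the sketch
    }

    public static void main(String[] args) {
        // Publish the service at a local endpoint; the WSDL is generated automatically.
        Endpoint.publish("http://localhost:8080/quote", new QuoteService());
    }
}
```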
Relaxed Multiple Routing Configurations: IP Fast Reroute for Single and Correlated Failures
Multi-topology routing is an increasingly popular IP network management concept that allows transport of different traffic types over disjoint network paths. The concept is of particular interest for the implementation of IP fast reroute (IP FRR). First, it can support guaranteed, instantaneous recovery from any single link or node failure, as well as from many combined failures. Second, different failures result in routing over different network topologies, which gives better control of the traffic distribution in the network after a failure. The IP FRR scheme based on multi-topology routing is called Multiple Routing Configurations (MRC).
IP fast reroute should provide full protection against all single link and node failures in the network. We propose an improved fast reroute scheme called “relaxed MRC” (rMRC). rMRC does not require that all links are isolated, which results in less constrained routing and gives two important implications:
First, multi-topology routing allows independent setting of link weights in the logical topologies. This implies that traffic can be routed according to a different set of link weights during the recovery phase than during normal operation, allowing independent traffic engineering for each topology. A careful tuning of the link weights in the logical backup topologies can improve the load distribution in the network after a failure has been encountered. We expect rMRC to further improve this ability.
Second, existing proactive recovery schemes are designed to guarantee recovery from single failures only. However, several studies show that multiple simultaneous failures are not uncommon in practice, and that in most cases there is a correlation between the elements that fail together. Such failures are often said to belong to a common Shared Risk Group (SRG). Examples of common failure correlations include IP links sharing the same conduit, fiber, network card, or router. The relaxed structure of rMRC makes it flexible enough to develop practical algorithms for fast recovery from SRG failures, provided that the topology remains connected.
The core idea of MRC and rMRC is to have different backup routing topologies in which certain nodes and links are not used for the routing of recovered traffic. If a link or node fails, traffic can still be forwarded in its corresponding backup topologies. The node detecting that the next hop for a packet is not reachable in its current topology just needs to switch the traffic to another still working routing topology.
Our tests indicate that sparser networks may not always be able to improve the load distribution using link weight optimizations and the current backup topology algorithms. As future work, we will explore more advanced heuristic algorithms for backup topology creation, which we believe will, together with link weight optimizations, improve the load distribution whenever possible. Furthermore, the relaxed fault-tolerant multi-topology routing eases the formal reasoning and could lead to a better understanding of, and better algorithms for, e.g., multi-fault tolerance.
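The forwarding decision at the core of MRC/rMRC can be pictured with the following sketch. The table layout, the topology identifiers, and the failure check are assumptions made for illustration, not the published algorithm.

```java
import java.util.LinkedHashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Each router keeps next-hop tables for the normal topology and a set of backup
// topologies; when the normal next hop is unreachable, traffic is switched to a
// backup topology that still routes around the failed component.
public class MrcForwarder {
    // topologyId -> (destination -> nextHop); the normal topology should be installed first.
    private final Map<Integer, Map<String, String>> nextHopTables = new LinkedHashMap<>();
    private final Set<String> failedNeighbours = new HashSet<>();

    public void installTable(int topologyId, Map<String, String> table) {
        nextHopTables.put(topologyId, table);
    }

    public void markFailed(String neighbour) {
        failedNeighbours.add(neighbour);
    }

    /** Returns a working next hop, falling back to backup topologies on failure. */
    public String selectNextHop(String destination) {
        for (Map<String, String> table : nextHopTables.values()) {
            String hop = table.get(destination);
            if (hop != null && !failedNeighbours.contains(hop)) {
                return hop;   // the normal topology is tried first, backups afterwards
            }
        }
        return null;          // destination unreachable in all configured topologies
    }
}
```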
Respected sir,
I am Sengamala Barani. My project title is Multi-Million Dollar Maintenance using WLS Service.
Analysis
The analysis phase answers the questions of who will use the system, what the system
will do, and where and when it will be used.
This phase has three steps:
1. An analysis strategy is developed to guide the project team's efforts. Such a strategy usually includes an analysis of the current system and its problems, and then ways to design a new system.
2. The next step is requirements gathering (e.g., through interviews or questionnaires). The analysis of this information, in conjunction with input from the project sponsor and many other people, leads to the development of a concept for a new system.
3. The analyses, system concept, and models are combined into a document called the system proposal, which is presented to the project sponsor and other key decision makers, who decide whether the project should continue to move forward.
Design
The design phase decides how the system will operate in terms of the hardware, software, and network infrastructure, and the user interface, forms, and reports that will be used.
1. The design strategy must be developed. This clarifies whether the system will be developed by the company's own programmers, whether it will be outsourced to another firm, or whether the company will buy an existing software package.
2. This leads to the development of the basic architecture design for the system
that describes the hardware, software, and network infrastructure that will be
used.
3. The database and file specifications are developed. These define exactly what
data will be stored and where they will be stored.
4. The analyst team develops the program design, which defines the programs that
need to be written and exactly what each program will do.
Cost calculation:
In this project, we can perform online banking transactions and thereby avoid manual banking transactions.
Respected sir,
I am Amala Devi. This clustering is subsequently used as a basis for cluster formation, which can be used for improving the scalability of services such as routing, network management, and security. A clustering approach based on the associativity ticks of the nodes was presented in this paper. Based on the use of these clusters for the selection of clusterheads, a distributed clustering algorithm named MACA was proposed. The simulation results show that MACA outperforms the existing ones and is also tunable to different kinds of network conditions.
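As a rough illustration of clusterhead selection driven by associativity ticks (in the spirit of MACA, not the published algorithm itself), consider the sketch below. The Node type, the tie-breaking rule, and the neighbourhood structure are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Each node counts "associativity ticks" (beacons received from stable neighbours);
// a node with locally maximal ticks elects itself clusterhead.
public class AssociativityClustering {
    static class Node {
        final int id;
        int associativityTicks;                 // stability measure accumulated over time
        final List<Node> neighbours = new ArrayList<>();
        Node(int id) { this.id = id; }
    }

    /** A node becomes clusterhead if no neighbour has more ticks (ties broken by smaller id). */
    static boolean isClusterhead(Node n) {
        for (Node m : n.neighbours) {
            if (m.associativityTicks > n.associativityTicks
                    || (m.associativityTicks == n.associativityTicks && m.id < n.id)) {
                return false;
            }
        }
        return true;
    }
}
```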
Respected sir,
ReplyDeleteE-FRAUD PREVENTION BASED ON THE SELF-AUTHENTICATION OF E-DOCUMENTS
1. Binary Image
In this module, the e-document is converted into a binary image using quantization techniques.
2. Encryption
In this module, the binary image is encrypted using the RSA asymmetric algorithm.
Read the binary plaintext image from a file and compute the size I × J of the image. Compute a cipher of size I × J using a private key and pre-condition the result. Convolve the binary plaintext image with the preconditioned cipher and normalize the output. Binarize the output using a threshold based on computing the mode of the Gaussian-distributed ciphertext. Insert the binary output into the lowest 1-bit layer of the host image and write the result to a file.
3. Hiding Image (Steganography)
The conversion of a ciphertext to another plaintext form is called stegotext conversion and is based on the use of a covertext. Some covertext must first be invented and the ciphertext mapped onto it in some way to produce the stegotext. This can involve the use of any attribute that is readily available, such as letter size, spacing, typeface, or other characteristics of a covertext, manipulated in such a way as to carry a hidden message. Digital images can be used to hide messages in other images. A colour image typically has 8 bits to represent each of the Red, Green, and Blue components. Each colour component is composed of 256 'colour values' (for a 24-bit image), and the modification of some of these values in order to hide other data is undetectable by the human eye. This modification is often undertaken by changing the least significant bit in the binary representation of a colour or grey-level value (for grey-level digital images). For example, for 7-bit ASCII conversion, the grey-level value 100 has the binary representation 1100100, which is equivalent to the character d. If we change the least significant bit to give 1100101 (which corresponds to a grey-level value of 101 and the character e), then the difference in the output image will not be discernible even though we have replaced the character d with the character e. In this way, the least significant bit can be used to encode information other than pixel intensity, and the larger the host image compared with the hidden message, the more difficult it is to detect the message.
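A minimal sketch of the least-significant-bit embedding described above, assuming the greyscale host image is available as a byte array of pixel values; the array layout and bit ordering are illustrative assumptions, not the paper's exact scheme.

```java
// Hides one message bit in the least significant bit of each host pixel and
// recovers the bits by reading the LSBs back.
public class LsbSteganography {

    /** Embeds message bits into the LSBs of the host pixels (host must be large enough). */
    public static byte[] embed(byte[] hostPixels, byte[] messageBits) {
        byte[] stego = hostPixels.clone();
        for (int i = 0; i < messageBits.length; i++) {
            stego[i] = (byte) ((stego[i] & 0xFE) | (messageBits[i] & 0x01));
        }
        return stego;
    }

    /** Recovers the embedded bits by extracting the LSB of each pixel. */
    public static byte[] extract(byte[] stegoPixels, int messageLength) {
        byte[] bits = new byte[messageLength];
        for (int i = 0; i < messageLength; i++) {
            bits[i] = (byte) (stegoPixels[i] & 0x01);
        }
        return bits;
    }
}
```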
4. Decryption
Read the watermarked image from a file and extract the lowest 1-bit layer from the image.
Regenerate the (non-preconditioned) cipher using the same key as in the encryption step. Correlate the cipher with the input and normalize the result. Quantize and format the output and write it to a file.
5. Extracting Original Document
In this module, the original document that was hidden in the host image is extracted using the steganography techniques.
by
Padmini
III MCA
Feature selection is a search problem for an “optimal” subset of features. The class separability is normally used as one of the basic feature selection criteria. Instead of maximizing the class separability as in the literature, this work adopts a criterion aiming to maintain the discriminating power of the data.
The key challenge in hybrid search is to estimate the number of peers that can answer a given query. Existing approaches assume that such a number can be directly obtained by computing item popularity. In this work, we show that this assumption is not always valid, and previous designs cannot distinguish whether the items related to a query are distributed across many peers or concentrated in a few peers. To address this issue, we propose QRank, a difficulty-aware hybrid search, which ranks queries by weighting keywords based on term frequency. Using the rank values, QRank selects the proper search strategy for each query. We conduct comprehensive trace-driven simulations to evaluate this design. The results show that QRank significantly improves search quality and reduces system traffic cost compared with existing approaches.
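A rough sketch of the term-frequency weighting idea behind QRank. The scoring formula, the threshold, and the mapping from rank value to a concrete search strategy are assumptions for illustration, not the published design.

```java
import java.util.List;
import java.util.Map;

// Ranks a query by weighting its keywords with term frequency: rare keywords
// push the rank up, suggesting that few peers can answer the query.
public class QueryRanker {

    /** termFrequency maps each keyword to how often it appears across shared items. */
    public static double rank(List<String> queryKeywords, Map<String, Integer> termFrequency) {
        if (queryKeywords.isEmpty()) {
            return 0.0;
        }
        double score = 0.0;
        for (String keyword : queryKeywords) {
            int tf = termFrequency.getOrDefault(keyword, 0);
            score += 1.0 / (1.0 + tf);     // rare terms dominate the rank value
        }
        return score / queryKeywords.size();
    }

    /** Difficult queries (high rank) and easy queries can be sent to different strategies. */
    public static String chooseStrategy(double rankValue, double threshold) {
        return rankValue > threshold ? "difficult-query strategy" : "popular-query strategy";
    }
}
```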
A hybrid Access 2010/SharePoint Access Services app seems to handle concurrent edit conflicts in two different manners, depending on how the various computers are connected. If Computer A and Computer B get the current data for a record, both make edits at approximately the same time, and both save the record, the computer that commits its data to SharePoint second gets a very simple dialog stating that the record has been changed by another user, offering options to Save Record, Copy to Clipboard, or Drop Changes. This behavior is like what one sees when two users have the same mdb or accdb open at the same time and both edit the same record. Simple, but not very informative.
If Computer A and Computer B get the current data for a record, one of them goes offline, and then both PCs update the same record, then when the disconnected PC goes back online and is able to sync, a bona fide conflict resolution screen opens up. It shows the data from both database instances side by side, and the user keeps one or the other. This is more informative, and possibly more useful.
The emphasis was on XML integration across enterprise boundaries. By contrast, SOA tends to focus on the architecture of a single enterprise, or of closely related enterprises, where the underlying assumption is that everything occurs within one big trusted zone. Although SOA shifts the emphasis toward internal architecture, B-to-B integration with partners is a natural extension, and in many cases a core benefit. Across firewalls, the solution can be as simple as a two-way SSL connection. “Something bad has to happen before SOA security tools really start happening,” Laird says. “We’ll see XML-based attacks, maybe even viruses, hitting someone publicly, and that’s what it’ll take to galvanize the industry.”
Fundamentally, we are seeing a huge shift among large enterprises that rely on IT for a large portion of their business toward genuinely embracing distributed integration approaches: employing both rich UIs and services atop messaging frameworks and platforms, and leveraging third-party SaaS and cloud-based services to do a lot of the back-end work. All of this has increased the flexibility and agility of software development, but it has at the same time drastically increased the volatility and the risk of costly failures in production. Companies that do not address these IT risks with methods and tools up to the task of today's complex architectures will fail in front of customers.
With emphasis on performing the following tasks:
1. Virtualization: IT resources can be shared between many computing resources (physical servers or application servers).
Benefit: more efficient utilization of IT resources and reduced hardware cost through resource consolidation and economies of scale, lowering the total cost of ownership and improving asset utilization.
2. Provisioning: IT resources are rapidly provisioned (or de-provisioned) based on consumer demand.
Benefit: reduced IT cycle time and management cost.
3. Elastic scaling: IT environments scale up and down by any magnitude as needed to satisfy customer demand (see the sketch after this list).
Benefit: optimized IT resource utilization and increased flexibility.
4. Service automation management: IT environments provide the capability to request, deliver, and manage IT services automatically.
Benefit: reduced IT operational costs by automating the processes used to deliver and manage a cloud computing environment.
5. Pervasiveness: Services are delivered through the use of the Internet and on any platform.
Benefit: an improved customer experience by enabling services to be accessed from anywhere, at any time, and on any device.
6. Flexible pricing: Services are tracked with usage metrics to enable multiple payment models.
Benefit: improved cost transparency and more flexible pricing schemes.
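The elastic-scaling task in item 3 can be pictured with the following sketch. It is not tied to any specific cloud API, and the utilization thresholds and instance limits are assumptions made for illustration.

```java
// A simple elastic-scaling decision: provision an extra instance when measured
// utilization is high, de-provision one when it is low, within fixed bounds.
public class ElasticScaler {
    private int runningInstances;
    private final int minInstances;
    private final int maxInstances;

    public ElasticScaler(int min, int max) {
        this.minInstances = min;
        this.maxInstances = max;
        this.runningInstances = min;
    }

    /** Returns the number of instances after reacting to the current average utilization. */
    public int adjust(double averageUtilization) {
        if (averageUtilization > 0.80 && runningInstances < maxInstances) {
            runningInstances++;           // scale up under heavy demand
        } else if (averageUtilization < 0.30 && runningInstances > minInstances) {
            runningInstances--;           // scale down when capacity sits idle
        }
        return runningInstances;
    }
}
```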
Respected sir
1. This algorithm can be easily implemented.
2. Data can be transferred using optimal path.
3. Performance is high and cost effective.
4. Security is high.
by
K.Rakesh Jayavendhan
Respected sir,
1. This algorithm can easily detect the failed link immediately when it is noticed.
2. It can be easily implemented.
3. It monitors the sensor nodes and networks in an optimal way.
4. It is cost-effective.
by,
B.Surendiran.
Respected Sir,
The website information (i.e., client information) that we submit to the websites (i.e., search engine database websites) is not secret information. We just provide this information to the search engine databases to bring more users to our client's website (i.e., search engine optimization).
The privacy policies of the search engine websites are not a major issue for providing our client's information. Hence there is no need to check the privacy policies of the websites.
The service of the search engine database websites is only to provide the information to the search engine when an internet user asks about our client's information (the services provided by our clients).
Thanks & Regards,
M.Selvaanitha
Respected sir
I chose Eclipse because:
1. Eclipse is very good for teaching OO Java Programming
2. Very extensible
3. Eclipse is free and open source software and very well supported by a worldwide community.
4. Development effort can be reduced by using the existing platform and available components.
5. Eclipse can serve as an integration level: all applications developed on Eclipse easily fit with other Eclipse-based applications.
by
S.Madhu Bala
Respected sir,
Distance Vector is best because:
If all routers were running a Distance Vector protocol, the path or 'route' chosen would be from A to B directly over the ISDN serial link, even though that link is about 10 times slower than the indirect route from A to C to D to B.
A Link State protocol would choose the A-C-D-B path because it uses a faster medium (100 Mb Ethernet). In this example, it would be better to run a Link State routing protocol, but if all the links in the network are the same speed, then a Distance Vector protocol is better.
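The following sketch contrasts the two decisions for the example above; the per-link cost values are assumptions chosen only to illustrate why hop count and link cost can disagree.

```java
// Why a distance-vector protocol (hop count) and a link-state protocol
// (sum of link costs) pick different paths in the A/B/C/D example.
public class RoutingComparison {
    public static void main(String[] args) {
        // Direct path A-B over the slow ISDN serial link: 1 hop, high link cost.
        int directHops = 1;
        int directCost = 10;          // assume the ISDN link costs roughly 10x a fast link

        // Indirect path A-C-D-B over 100 Mb Ethernet links: 3 hops, low per-link cost.
        int indirectHops = 3;
        int indirectCost = 3 * 1;

        System.out.println("Distance vector prefers: "
                + (directHops <= indirectHops ? "A-B (fewer hops)" : "A-C-D-B"));
        System.out.println("Link state prefers:      "
                + (indirectCost < directCost ? "A-C-D-B (lower total cost)" : "A-B"));
    }
}
```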
By
Arun Kumar P.V
Respected sir,
Wireless sensor networks (WSNs), which normally consist of hundreds or thousands of sensor nodes, each capable of sensing, processing, and transmitting environmental information, are deployed to monitor certain physical phenomena or to detect and track certain targets in an area of interest. The issue of tracking moving targets in WSNs has received significant attention in recent years. As with any other algorithm designed for sensor networks, a tracking algorithm should be:
(1) Self-organizing, i.e. it should not depend on global infrastructure;
(2) Energy efficient, i.e. it should require little computation and, especially, communication;
(3) Robust, i.e. it should not depend on noise and movement of the target;
(4) Accurate, i.e. it should work with accuracy and precision in various environments, and should not depend on sensor-to-sensor connectivity in the network;
(5) Reliable, i.e. it should be tolerant to node failures.
Tracking a target in sensor networks is challenging mainly because of the energy consumption imposed by the resource-constrained nature of such networks. The minimization of energy consumption for a sensor network with target activities is complicated, since target estimation involves collaborative sensing and communication between different nodes. The problem of selecting the best nodes for tracking a target in a distributed wireless sensor network has been investigated before.
The main idea is for a network to determine participants in sensor collaboration by dynamically optimizing the utility function of data for a given cost of communication and computation.
Previous research has focused on information-theoretic node selection approaches, that is, on heuristics to select an informative sensor such that the fusion of the selected sensor's observation with the prior target location distribution would yield, on average, the greatest reduction in the entropy of the target location distribution. In one approach, the sensor node that will result in the smallest expected posterior uncertainty of the target state is chosen as the next node to contribute to the movement decision. Specifically, minimizing the expected posterior uncertainty is equivalent to maximizing the mutual information between the sensor node output and the target state. In another approach, an entropy-based sensor selection heuristic is proposed for target localization, in which a sensor node is chosen at each step and the observation of that node is incorporated into the target location distribution using sequential Bayesian filtering. Instead, the main idea underlying our approach is that the heuristic selects an informative sensor such that the fusion of the selected sensor's observation with the prior target location distribution minimizes the overall energy consumption in a cluster while maximizing the mutual information between the sensor node's observation and the target state, in order to improve the quality of the tracking data. We show that by properly selecting the nodes that collect measurements for a cluster head, we can save energy and maximize the sensor network lifetime, and we compare our node selection algorithm against these approaches.
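The node-selection idea can be pictured with the following sketch, which trades expected information gain against reporting energy. The Sensor fields and the scoring formula are assumptions for illustration, not the algorithm described in the project.

```java
import java.util.List;

// Picks the next sensor to query: the one offering the most expected information
// about the target per unit of energy spent reporting to the cluster head.
public class NodeSelection {
    static class Sensor {
        final int id;
        final double expectedInfoGain;   // e.g. expected reduction in target-location entropy
        final double reportingEnergy;    // energy to sense and transmit to the cluster head
        Sensor(int id, double gain, double energy) {
            this.id = id;
            this.expectedInfoGain = gain;
            this.reportingEnergy = energy;
        }
    }

    /** Returns the candidate with the best information gain per unit of energy. */
    static Sensor selectNext(List<Sensor> candidates) {
        Sensor best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Sensor s : candidates) {
            double score = s.expectedInfoGain / s.reportingEnergy;
            if (score > bestScore) {
                bestScore = score;
                best = s;
            }
        }
        return best;
    }
}
```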
By
Manimuthu S.M.
Respected sir,
I chose this algorithm because the existing congestion load minimization algorithms minimize the load of the congested AP, but they do not necessarily balance the load of the non-congested APs. So we consider a Min-Max load balancing approach that not only minimizes the network congestion load but also balances the load of the non-congested APs.
The existing load balancing schemes achieved considerable improvement in terms of throughput and fairness, but they require certain support from the client side. In contrast, the proposed scheme does not require any proprietary client support.
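A greedy sketch of the Min-Max idea: each client is associated with the currently least-loaded AP, so the maximum AP load grows as slowly as possible. The load model and data types are assumptions for illustration, not the scheme's actual association mechanism.

```java
// Assigns clients to access points so that no single AP becomes the bottleneck.
public class MinMaxLoadBalancer {

    /** clientLoads[i] is client i's demand; returns the chosen AP index for each client. */
    static int[] assign(double[] clientLoads, int apCount) {
        double[] apLoad = new double[apCount];
        int[] assignment = new int[clientLoads.length];
        for (int i = 0; i < clientLoads.length; i++) {
            int leastLoaded = 0;
            for (int ap = 1; ap < apCount; ap++) {
                if (apLoad[ap] < apLoad[leastLoaded]) {
                    leastLoaded = ap;     // the client joins the currently least-loaded AP
                }
            }
            apLoad[leastLoaded] += clientLoads[i];
            assignment[i] = leastLoaded;
        }
        return assignment;
    }
}
```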
By
Nagarathinakumar .B
Respected sir,
Increasingly, several applications require the acquisition of data from the physical world in a reliable and automatic manner. This necessity implies the emergence of new kinds of networks, which are typically composed of low-capacity devices. Such devices, called sensors, make it possible to capture and measure specific elements of the physical world (e.g., temperature, pressure, humidity).
Moreover, they run on small batteries with low energy capacities. Consequently, their power consumption must be optimized in order to ensure an increased lifetime for these devices. During data collection, two mechanisms are used to reduce energy consumption: message aggregation and filtering of redundant data. These mechanisms generally use clustering methods in order to coordinate aggregation and filtering. Clustering methods belong to one of two categories: distributed and centralized. The centralized approach assumes the existence of a particular node that is cognizant of the information pertaining to the other network nodes. The problem is then modeled as a graph partitioning problem with particular constraints that render it NP-hard. The central node determines the clusters by solving this partitioning problem. However, the major drawbacks of this category are linked to the additional costs engendered by communicating the network node information and the time required to solve an optimization problem. In the second category, the distributed method, each node executes a distributed clustering algorithm. The major drawback of this category is that nodes have limited knowledge of their neighborhood. Hence, clusters are not built in an optimal manner.
In order to facilitate the use of tabu search for CBP, a new graph called Gr is defined. It is capable of determining feasible clusters. A feasible cluster consists of a set of nodes that fulfill the cluster building constraints. Nodes that satisfy Constraint (10), i.e., those ensuring zone coverage, are called active nodes. The vertices of Gr represent the network nodes. An edge (i,j) is defined in graph Gr between nodes i and j if they satisfy the constraints. Consequently, it is clear that a clique in Gr embodies a feasible cluster. A clique consists of a set of nodes that are adjacent to one another.
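A small sketch of the clique test implied by the construction of Gr: a candidate cluster is feasible only if every pair of its members is adjacent in Gr. The adjacency representation is an assumption made for illustration.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Checks whether a candidate cluster forms a clique in the feasibility graph Gr.
public class ClusterFeasibility {

    /** adjacency.get(i) holds the neighbours of node i in Gr. */
    static boolean isFeasibleCluster(Set<Integer> cluster, Map<Integer, Set<Integer>> adjacency) {
        for (int i : cluster) {
            for (int j : cluster) {
                if (i != j && !adjacency.getOrDefault(i, Collections.emptySet()).contains(j)) {
                    return false;   // some pair is not adjacent, so this is not a clique
                }
            }
        }
        return true;
    }
}
```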
By
Karthick .B
Respected Sir
Searching for a node with a known property is a basic, recurring problem arising in many distributed applications.
Biased random walks are random walks in which nodes have a statistical preference to forward the walker toward the target.
The advantage of a biased random walk is that it significantly reduces the expected number of steps before the target is reached, called the hitting time.
We study the effect of the bias on the hitting time when the random walk is executed over a wireless network.
Compared to a flooding algorithm, a random walk search has more fine-grained control of the search space, adapts better to termination conditions, and can naturally cope with failures or voluntary disconnections of nodes.
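One step of a biased random walk can be sketched as follows; the closeness score and the bias exponent used for weighting are assumptions made for illustration, not the algorithm analysed in the project.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Chooses the next hop of a random walk, weighting neighbours believed to be
// closer to the target more heavily (roulette-wheel selection over the weights).
public class BiasedRandomWalk {
    private static final Random RNG = new Random();

    /** closeness.get(n) estimates how close neighbour n is to the target (higher is closer). */
    static int nextHop(List<Integer> neighbours, Map<Integer, Double> closeness, double bias) {
        double[] weights = new double[neighbours.size()];
        double total = 0.0;
        for (int i = 0; i < neighbours.size(); i++) {
            weights[i] = Math.pow(closeness.getOrDefault(neighbours.get(i), 1.0), bias);
            total += weights[i];
        }
        double r = RNG.nextDouble() * total;
        for (int i = 0; i < neighbours.size(); i++) {
            r -= weights[i];
            if (r <= 0) {
                return neighbours.get(i);
            }
        }
        return neighbours.get(neighbours.size() - 1);   // fallback for rounding error
    }
}
```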
by
M.Karthikeyan