Reg No: B7S18381
Name: Ragul Prasad. G
Class: B.Sc (CS) (A.N)
Reg No: B7S18263
Name: M. Gopigurunathan
Class: B.C.A (A.N)
Title: Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key Updates
Abstract:
When a client wants to upload a file to the cloud, the file should be encrypted with a secret key before it is uploaded. If the secret keys for all files are kept on the client side, security and computation-overhead problems may arise; if they are kept on the cloud server, they may be hacked easily. Here, a third-party auditor (TPA) creates an encrypted secret key and sends it to the appropriate client for encrypting and uploading data to the cloud. The client downloads the encrypted secret key from the authorized party and decrypts it only when he wants to upload new files. Each file is encrypted under this key and uploaded to the cloud, and every uploaded file must be audited by the TPA; only then is it stored in the cloud. Since both the keys and the files are encrypted in the cloud, hackers cannot easily compromise the files or their corresponding keys.
Software Requirements:
* Operating System: Windows 7
* Coding Language: ASP.NET
* Database: SQL Server
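The key-handling flow above can be sketched as follows. This is a minimal toy illustration, not the project's actual scheme: the hash-based keystream cipher, the XOR "blinding" of the file key, and the SHA-256 audit tag are all simplified stand-ins for real cryptographic primitives.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy CTR-style construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The TPA generates a per-file encryption key and sends it to the client
# "blinded" (encrypted) under the client's long-term secret.
client_secret = secrets.token_bytes(32)
file_key = secrets.token_bytes(32)
encrypted_key = xor_crypt(client_secret, file_key)   # what travels / is stored

# The client recovers the key only at upload time...
recovered_key = xor_crypt(client_secret, encrypted_key)

# ...encrypts the file under it, and the TPA audits a tag over the ciphertext.
plaintext = b"file contents to upload"
ciphertext = xor_crypt(recovered_key, plaintext)
audit_tag = hashlib.sha256(ciphertext).hexdigest()
```

Because the key is stored only in encrypted form, neither the cloud nor an eavesdropper learns `file_key` without the client's secret.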
Reg No:B7S17575
ReplyDeleteName: p. Manikandan
Class: Bsc computer science (F.N)
Route Stability in MANETs under the Random Direction Mobility Model
Abstract:
A fundamental issue arising in mobile ad hoc networks (MANETs) is the selection of the optimal path between any two nodes. A method that has been advocated to improve routing efficiency is to select the most stable path, so as to reduce the latency and the overhead due to route reconstruction. In this work, we study both the availability and the duration probability of a routing path that is subject to link failures caused by node mobility. In particular, we focus on the case where the network nodes move according to the Random Direction model, and we derive both exact and approximate (but simple) expressions for these probabilities. Through our results, we study the problem of selecting an optimal route in terms of path availability. Finally, we propose an approach to improve the efficiency of reactive routing protocols.
Algorithm / Technique used:
Random Direction model.
Algorithm Description:
The Random Direction model is a random-based mobility model used in mobility management schemes for mobile communication systems. Mobility models are designed to describe the movement pattern of mobile users, and how their location, velocity and acceleration change over time; they are used for simulation purposes when new network protocols are evaluated.
In random-based mobility simulation models, the mobile nodes move randomly and freely without restrictions: each node's speed and direction are chosen randomly and independently of other nodes. In the Random Direction model specifically, a node picks a random direction and speed, travels in a straight line until it reaches the boundary of the simulation area, then pauses and picks a new direction.
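A single movement epoch of the Random Direction model can be simulated as follows. This is a simplified sketch under assumed parameters (a square area, a uniform speed range, no pause time); directions pointing outward from a boundary point yield a zero-length epoch in this simplification.

```python
import math
import random

def random_direction_step(x, y, area=1000.0, speed_range=(1.0, 10.0)):
    """One epoch of the Random Direction model (simplified): pick a
    uniformly random direction and speed, travel in a straight line
    until the boundary of the square [0, area]^2 is reached.
    Returns the new position and the epoch duration."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    speed = random.uniform(*speed_range)
    dx, dy = math.cos(theta), math.sin(theta)
    # Distance until each boundary would be hit; the nearer one stops us.
    tx = (area - x) / dx if dx > 0 else x / -dx if dx < 0 else math.inf
    ty = (area - y) / dy if dy > 0 else y / -dy if dy < 0 else math.inf
    dist = min(tx, ty)
    return x + dist * dx, y + dist * dy, dist / speed
```

Iterating this step (with a pause between epochs) produces the node trajectories whose link lifetimes the path-duration analysis is built on.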
Existing System:
The problem of link and route stability has been widely addressed in the literature. Several routing protocols account for route stability while selecting the source-destination path, to name a few approaches. In particular, prior work considers nodes moving along non-random patterns and exploits some knowledge of the nodes' motion to predict the path duration. Studies on link and path availability and duration have also been presented; these consider a partially deterministic motion model and a Brownian motion model in which nodes start moving from the same location.
Proposed System:
We focus on the stability of a routing path, which is subject to link failures caused by node mobility. We define the path duration as the time interval from when the route is established until one of the links along the route becomes unavailable, while we say that a path is available at a given time instant t when all links along the path are active at time t. Our objective is then to derive the probability that the path duration exceeds a given time t and the probability of path availability at time t.
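Under the common simplifying assumption that links fail independently, both quantities can be sketched numerically: path availability is the product of per-link availabilities, and if link lifetimes are exponentially distributed, the path duration is the minimum of the link durations. This is a hypothetical model for illustration, not the paper's exact derivation.

```python
import math

def path_availability(link_avail):
    """P(all links on the path are active at time t), assuming
    independent links with the given per-link availabilities."""
    p = 1.0
    for a in link_avail:
        p *= a
    return p

def path_duration_survival(link_rates, t):
    """P(path duration > t) when link lifetimes are independent
    exponentials: the minimum of exponentials is exponential
    with the summed rate."""
    return math.exp(-sum(link_rates) * t)
```

Note how both expressions degrade with hop count: longer paths are both less available and shorter-lived, which is exactly why route selection should weigh stability.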
Hardware Requirements:
• System : Pentium IV, 2.4 GHz
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : 15-inch VGA colour
• Mouse : Logitech
• RAM : 256 MB
Software Requirements:
• Operating System : Windows XP Professional
• Front End : ASP.NET 2.0
• Coding Language : Visual C# .NET
Reg No: B7S17581
Name: S. Nivas Raj
Class: B.Sc Computer Science (F.N)
Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation
ABSTRACT
With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure that the integrity of shared data can be verified publicly, users in the group need to compute signatures on all the blocks in the shared data. Different blocks are generally signed by different users, due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of the shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this project, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of the shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation.
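The proxy re-signature idea can be illustrated with a toy exponentiation-based scheme. This is purely illustrative and insecure; the modulus, hash, and key generation are simplified assumptions, not the scheme the project uses.

```python
import hashlib
import math
import secrets

P = 2**127 - 1            # a Mersenne prime used as a toy modulus
Q = P - 1                 # exponents are taken mod P - 1

def gen_key() -> int:
    """Pick a secret key invertible mod Q, so a re-signing key exists."""
    while True:
        sk = secrets.randbelow(Q - 2) + 2
        if math.gcd(sk, Q) == 1:
            return sk

def h(block: bytes) -> int:
    return int.from_bytes(hashlib.sha256(block).digest(), "big") % P

def sign(sk: int, block: bytes) -> int:
    return pow(h(block), sk, P)

def rekey(sk_old: int, sk_new: int) -> int:
    """rk = sk_new / sk_old (mod Q): handed to the cloud, it lets the
    cloud translate a signature under sk_old into one under sk_new
    without seeing the underlying data."""
    return (sk_new * pow(sk_old, -1, Q)) % Q

def resign(rk: int, sig: int) -> int:
    return pow(sig, rk, P)
```

After revoking Alice, the cloud applies `resign(rekey(sk_alice, sk_bob), sig)` to each of her blocks; the result equals Bob signing the block directly, so no block ever has to be downloaded and re-signed by a user.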
Software Requirements:
Operating System : Windows XP / 7
Coding Language : Java
Database : MySQL
Reg No: B7S18361
Name: R. Ajith Kumar
Class: B.Sc Computer Science (A.N)
Reg. No: B7S17486
Name: M. Praveen
Class: BCA (F.N)
Title: SCALABLE AND SECURE SHARING OF PERSONAL HEALTH RECORDS IN CLOUD COMPUTING USING ATTRIBUTE-BASED ENCRYPTION
ABSTRACT: Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party such as a cloud provider. However, there have been wide privacy concerns, as personal health information could be exposed to third-party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, encrypting the PHRs before outsourcing is a promising method. Yet issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control.
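In attribute-based encryption, decryption succeeds only when a user's attributes satisfy the access policy attached to the ciphertext. The policy-evaluation step can be sketched as follows; the policy encoding and attribute names are illustrative assumptions, and real ABE enforces this check cryptographically rather than with plain code.

```python
def satisfies(policy, attrs):
    """Evaluate a boolean access policy against a set of attributes.
    A policy is either an attribute string, or a tuple
    ("AND" | "OR", [subpolicies])."""
    if isinstance(policy, str):
        return policy in attrs
    op, parts = policy
    results = [satisfies(p, attrs) for p in parts]
    return all(results) if op == "AND" else any(results)

# Hypothetical example: the patient, or a physician in cardiology,
# may decrypt this PHR.
policy = ("OR", ["patient", ("AND", ["physician", "cardiology"])])
```

Fine-grained control falls out of the policy structure: adding or removing a leaf attribute changes exactly who can decrypt, without re-encrypting for each user individually.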
INFORMATION CONTENT BASED SELECTION OVER DATA MINING ANALYSIS
ABSTRACT
Information Content is developed using PHP, CSS, and JavaScript. The project contains an admin side and a user side, where a user can view the available posts and download them easily. Additionally, the admin plays an important role in the management of this system. In this project, the main functions are performed from the admin side. As for the features of Information Content, the user can view all the available posts and simply download them. All the posts are displayed in the homepage section; the user can click on any of them to view the full post, along with the published date and author name.
SYSTEM ANALYSIS
EXISTING SYSTEM
Less security
No consistency
No flexibility
PROPOSED SYSTEM
To easily obtain, upload, and collaborate on data files within a company.
Communicate through an integrated messaging system.
Present and manage content for publication on the internet.
Greater consistency
Improved site navigation
Increased site flexibility
Support for decentralized authoring
Increased security
Reduced site maintenance costs
HARDWARE AND SOFTWARE SPECIFICATIONS
Hardware Requirements
Processor: Pentium 4 or higher
RAM: 512 MB or more
Disk Space: 80 GB or higher
Software Requirements
PHP version 5.4.3
MySQL Database 5.5.24
Apache Web Server 2.2.22
MODULES DESCRIPTION
Modules:
Admin Panel
Manage available Content
Insert Content
Download contents
ADMIN PANEL
From the admin panel, the admin has full access to the system. He/she can manage all the contents of the site. In order to add a post, the admin has to enter a suitable post title, post author, keywords, an image file, and the post content. Besides, the admin can easily manage all the featured posts.
MANAGE AVAILABLE CONTENT
The user can simply view all the topics and read each one's full story/details. He/she can also leave a comment on a particular piece of content. The admin can easily add posts by entering the post content and description. This simple project contains only adding posts to a website.
INSERT CONTENT
Once the final content is in the repository, it can then be published out to either the website or intranet. Content management systems boast powerful publishing engines which allow the appearance and page layout of the site to be applied automatically during publishing. It may also allow the same content to be published to multiple sites.
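The publishing step described above, applying layout automatically and pushing the same repository content to multiple sites, can be sketched like this; the template format and site names are hypothetical.

```python
def publish(content: dict, template: str, sites: list) -> dict:
    """Render one piece of repository content through a layout template
    and produce the output page for each target site."""
    return {site: template.format(site=site, **content) for site in sites}

# Hypothetical layout template applied during publishing.
template = ("<html><body><h1>{title}</h1><p>{body}</p>"
            "<footer>{site}</footer></body></html>")

pages = publish({"title": "Welcome", "body": "Hello, readers."},
                template,
                ["www.example.com", "intranet.example.com"])
```

The content lives once in the repository; only the rendering per site differs, which is the separation of content from presentation that the module relies on.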
DOWNLOAD
This module lets the user download the content and store it; only authorized users can use this access method.
SYSTEM DESIGN
Data Flow Diagram
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modelling its process aspects. DFDs can also be used for the visualization of data processing.
A DFD shows what kinds of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It does not show information about the timing of processes, or information about whether processes will operate in sequence or in parallel.
Symbols used in DFD are as follows:
INPUT/ OUTPUT
This symbol is used to show the input to the system or process and to show the output from the system or process.
DATA PROCESS
This symbol is used to show the process performed in the system to generate information from the raw input.
FILE/ DATABASE
This symbol is used to show the database storage of the system. It is common practice to draw the context-level data flow diagram first, which shows the interaction between the system and external agents which act as data sources and data sinks. On the context diagram the system's interactions with the outside world are modeled purely in terms of data flows across the system boundary.
Diagrams (not shown): User DFD, Level-2 DFD, Admin-level DFD.
Reg. No: B7S17467
Name: S. Balamurugan
Class: BCA (F.N)
Title: CATCH YOU IF YOU MISBEHAVE: RANKED KEYWORD SEARCH RESULTS
VERIFICATION IN CLOUD COMPUTING
ABSTRACT:
With the advent of cloud computing, more and more people tend to outsource their data to the cloud. As a fundamental data utilization, secure keyword search over encrypted cloud data has attracted the interest of many researchers recently. However, most existing research is based on an ideal assumption that the cloud server is "curious but honest", where the search results are not verified. In this paper, we consider a more challenging model, where the cloud server would probably behave dishonestly. Based on this model, we explore the problem of result verification for secure ranked keyword search. Different from previous data verification schemes, we propose a novel deterrent-based scheme. With our carefully devised verification data, the cloud server cannot know which data owners, or how many data owners, exchange anchor data which will be used for verifying the cloud server's misbehavior. With our systematically designed verification construction, the cloud server cannot know which data owners' data are embedded in the verification data buffer, or how many data owners' verification data are actually used for verification. All the cloud server knows is that, once he behaves dishonestly, he would be discovered with a high probability, and punished seriously once discovered. Furthermore, we propose to optimize the value of the parameters used in the construction of the secret verification data buffer. Finally, with thorough analysis and extensive experiments, we confirm the efficacy and efficiency of our proposed schemes.
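The deterrent effect can be quantified with a simple model: if some owners secretly embed verification data and the server cannot tell which, each act of cheating is caught with the corresponding probability, so repeated cheating is detected almost surely. This is a simplified independence model with hypothetical parameters, not the paper's optimized construction.

```python
def detection_probability(n_owners: int, n_verifiers: int, n_cheats: int) -> float:
    """P(at least one act of cheating is detected), assuming each cheat
    independently hits a verifying owner with prob n_verifiers/n_owners."""
    p_miss_once = 1.0 - n_verifiers / n_owners
    return 1.0 - p_miss_once ** n_cheats
```

Even with only 10 verifiers among 100 owners, cheating 50 times is detected with probability 1 - 0.9**50, roughly 0.995, which is what makes sustained misbehavior irrational for the server.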
REG NUMBER: B8T20662
NAME: MADHANKUMAR A
CLASS: M.Sc (IT) FINAL YEAR
TITLE: A RELIABLE COST OPTIMIZATION IN GREEN CLOUD
ABSTRACT
Functioning as an intermediary between users and cloud providers can bring great benefits to the cloud market. Using several pricing schemes, the cloud storage provider fixes the cost of utilizing its storage for each file. The cloud provider thus optimizes the cost for every file and maintains the files in reliable cloud storage. However, as the energy costs of cloud computing have been increasing rapidly, cloud providers need to optimize energy efficiency while maintaining high service-level performance for tenants, not only for their own benefit but also for social welfare (e.g., protecting the environment). This makes the cloud environment more effective and enables better cost optimization.
EXISTING TECHNIQUE
Though cloud computing is still in its relative infancy, it has earned rapid interest and adoption due to its advantages. Cloud tenants (e.g., Dropbox) purchase cloud computing services from cloud providers (e.g., Amazon, Microsoft Azure). There is no pricing scheme to ensure that the cost of files in cloud storage is optimized, so it is very difficult to implement and manage cloud technologies that handle this energy consumption problem, and this creates complex issues for cloud resource users.
DISADVANTAGES
Very high pricing schemes
High energy consumption
PROPOSED TECHNIQUE
Our objective in this paper is to minimize the energy consumption of cloud servers. However, in reality, cloud service brokers (CSBs) are always profit-driven, and they have no incentive to minimize the energy cost incurred by cloud providers. We propose a novel mechanism to ensure effective energy consumption and provide a better technique for optimizing the cost of the files stored in a cloud environment.
ADVANTAGES
Effective energy consumption
Better cost optimization
MODULES
User Registration
Authentication & Login
Cloud Storage
Cost Optimization
USER REGISTRATION
Before acquiring the service, users have to register their personal details, such as user name, email ID, and contact number. These details are processed and stored in the server database, and they are checked whenever the user authenticates.
AUTHENTICATION & LOGIN
After the basic registration process is complete, an individual account is created for each user, and the service is accessed through it. Users can use the service only through this account. The user and the admin each have their own login.
CLOUD STORAGE
Cloud storage is a model of data storage in which digital data is stored in logical pools: the physical storage spans multiple servers (and often locations), and the physical environment is typically owned and managed by a hosting company. Files are stored only in an encrypted format, to provide security.
COST OPTIMIZATION
After the file-upload process is complete, the cloud resource provider fixes the cost of each file stored in the cloud environment. Using an efficient pricing scheme, the cost of each file is optimized per day and for file maintenance, and each user's cost is displayed on their home page.
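The per-file cost optimization can be sketched as picking, for each file, the storage tier that minimizes the monthly cost of storage plus access/maintenance. The tier names and prices below are hypothetical, not any provider's actual pricing.

```python
def monthly_cost(size_gb: float, accesses: int, tier: dict) -> float:
    """Storage cost plus per-access cost for one file in one tier."""
    return size_gb * tier["gb_price"] + accesses * tier["access_price"]

def cheapest_tier(size_gb: float, accesses: int, tiers: dict) -> str:
    """Pick the tier with the lowest total monthly cost for this file."""
    return min(tiers, key=lambda t: monthly_cost(size_gb, accesses, tiers[t]))

# Hypothetical tiers: hot storage is cheap to access, cold is cheap to keep.
tiers = {
    "hot":  {"gb_price": 0.023, "access_price": 0.0001},
    "cold": {"gb_price": 0.004, "access_price": 0.0100},
}
```

A rarely accessed archive lands in cold storage while a frequently read file stays hot; lowering total storage cost this way also tends to lower the energy footprint, which is the "green" angle of the proposal.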