
Saturday, July 6, 2013

DotNET Project Titles, DotNET Project Abstracts, DotNET IEEE Project Abstracts, DotNET Projects abstracts CSE IT MCA, DotNET Titles, Download DotNET Project Abstracts, Download IEEE DotNET Abstracts

DOTNET PROJECTS - ABSTRACTS

A Distributed Control Law for Load Balancing in Content Delivery Networks
Abstract : 
In this paper, we face the challenging issue of defining and implementing an effective law for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study of a CDN system, carried out through the exploitation of a fluid flow model characterization of the network of servers. Starting from such characterization, we derive and prove a lemma about the network queues equilibrium. 
This result is then leveraged in order to devise a novel distributed and time-continuous algorithm for load balancing, which is also reformulated in a time-discrete version. The discrete formulation of the proposed balancing law is eventually discussed in terms of its actual implementation in a real-world scenario. Finally, the overall approach is validated by means of simulations.
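As a rough illustration of what such a discrete-time balancing law can look like in code, here is a toy C# sketch (ours, not the authors' implementation): each server repeatedly sheds a fraction of its queue-length surplus toward less-loaded overlay neighbors. The topology, gain factor, and initial queues are invented for the example.

```csharp
using System;
using System.Linq;

// Discrete-time balancing step in the spirit of the paper's law: each
// server moves load downhill toward less-loaded neighbors.
class CdnBalancerSketch
{
    static void Main()
    {
        double[] queues = { 90, 40, 10, 60 };              // pending requests per server
        int[][] neighbors =
        {
            new[] { 1, 3 }, new[] { 0, 2 }, new[] { 1, 3 }, new[] { 0, 2 }
        };
        const double gain = 0.25;                          // small enough for stability

        for (int step = 0; step < 60; step++)
        {
            var delta = new double[queues.Length];
            for (int i = 0; i < queues.Length; i++)
                foreach (int j in neighbors[i])
                    if (queues[i] > queues[j])
                    {
                        // dividing by the degree bounds a server's total outflow
                        double t = gain * (queues[i] - queues[j]) / neighbors[i].Length;
                        delta[i] -= t;
                        delta[j] += t;
                    }
            for (int i = 0; i < queues.Length; i++) queues[i] += delta[i];
        }
        // queues converge toward the balanced value (50 each here)
        Console.WriteLine(string.Join(", ", queues.Select(q => q.ToString("F1"))));
    }
}
```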
  

A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data   
"Feature selection involves identifying a subset of the most useful features that produces compatible results as the original entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. While the efficiency concerns the time required to find a subset of features, the effectiveness is related to the quality of the subset of features. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. 
In the first step, features are divided into clusters by using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to target classes is selected from each cluster to form a subset of features. Features in different clusters are relatively independent; the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. 
To ensure the efficiency of FAST, we adopt the efficient minimum-spanning-tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST and several representative feature selection algorithms, namely, FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely, the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection.
The results, on 35 publicly available real-world high-dimensional image, microarray, and text data, demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
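To make the two-step structure concrete, here is a compact C# sketch of the idea. The matrix ff and vector fc below are invented stand-ins for the symmetric uncertainty values the paper computes from data (feature-feature and feature-class, respectively), and the edge-cut rule (drop an MST edge weaker than both endpoints' relevance to the class) follows our reading of FAST.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Step 1: MST over feature distances, then cut weak edges to form clusters.
// Step 2: keep the single most class-relevant feature per cluster.
class FastSketch
{
    static void Main()
    {
        double[,] ff =                         // stand-in SU(feature i, feature j)
        {
            { 1.0, 0.8, 0.1, 0.2 },
            { 0.8, 1.0, 0.2, 0.1 },
            { 0.1, 0.2, 1.0, 0.7 },
            { 0.2, 0.1, 0.7, 1.0 },
        };
        double[] fc = { 0.6, 0.5, 0.4, 0.55 }; // stand-in SU(feature i, class)
        int n = fc.Length;

        // Prim's MST over distance 1 - ff
        bool[] inTree = new bool[n];
        int[] parent = new int[n];
        double[] best = Enumerable.Repeat(double.MaxValue, n).ToArray();
        best[0] = 0; parent[0] = -1;
        for (int k = 0; k < n; k++)
        {
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
            inTree[u] = true;
            for (int v = 0; v < n; v++)
                if (!inTree[v] && 1 - ff[u, v] < best[v]) { best[v] = 1 - ff[u, v]; parent[v] = u; }
        }

        // cut MST edges weaker than both endpoints' class relevance
        var adj = Enumerable.Range(0, n).Select(_ => new List<int>()).ToArray();
        for (int v = 1; v < n; v++)
        {
            int u = parent[v];
            if (ff[u, v] >= Math.Min(fc[u], fc[v])) { adj[u].Add(v); adj[v].Add(u); }
        }

        // pick the most class-relevant feature of each remaining component
        var selected = new List<int>();
        bool[] seen = new bool[n];
        for (int s = 0; s < n; s++)
        {
            if (seen[s]) continue;
            var stack = new Stack<int>(new[] { s });
            int bestFeat = s;
            seen[s] = true;
            while (stack.Count > 0)
            {
                int u = stack.Pop();
                if (fc[u] > fc[bestFeat]) bestFeat = u;
                foreach (int v in adj[u]) if (!seen[v]) { seen[v] = true; stack.Push(v); }
            }
            selected.Add(bestFeat);
        }
        Console.WriteLine("Selected features: " + string.Join(", ", selected));
    }
}
```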


A Generalized Flow based Method for Analysis of Implicit Relationships on Wikipedia 
ABSTRACT: 
We focus on measuring relationships between pairs of objects in Wikipedia whose pages can be regarded as individual objects. Two kinds of relationships between two objects exist: in Wikipedia, an explicit relationship is represented by a single link between the two pages for the objects, and an implicit relationship is represented by a link structure containing the two pages. 
Some of the previously proposed methods for measuring relationships are cohesion-based methods, which underestimate objects having high degrees, although such objects could be important in constituting relationships in Wikipedia. The other methods are inadequate for measuring implicit relationships because they use only one or two of the following three important factors: distance, connectivity, and cocitation. 
We propose a new method using a generalized maximum flow that reflects all three factors and does not underestimate objects having high degrees. We confirm through experiments that our method can measure the strength of a relationship more appropriately than these previously proposed methods do. Another remarkable aspect of our method is mining elucidatory objects, that is, objects constituting a relationship. We explain that mining elucidatory objects would open a novel way to deeply understand a relationship.


Graph-Based Consensus Maximization Approach for Combining Multiple Supervised and Unsupervised Models
Abstract :
Ensemble learning has emerged as a powerful method for combining multiple models. Well-known methods, such as bagging, boosting, and model averaging, have been shown to improve accuracy and robustness over single models. However, due to the high cost of manual labeling, it is hard to obtain sufficient and reliable labeled data for effective training. Meanwhile, large amounts of unlabeled data exist in these applications, from which we can readily obtain multiple unsupervised models.
Although unsupervised models do not directly generate a class label prediction for each object, they provide useful constraints on the joint predictions for a set of related objects. Therefore, incorporating these unsupervised models into the ensemble of supervised models can lead to better prediction performance. 
In this paper, we study ensemble learning with outputs from multiple supervised and unsupervised models, a topic where little work has been done. We propose to consolidate a classification solution by maximizing the consensus among both supervised predictions and unsupervised constraints. We cast this ensemble task as an optimization problem on a bipartite graph, where the objective function favors the smoothness of the predictions over the graph, but penalizes the deviations from the initial labeling provided by the supervised models. 
We solve this problem through iterative propagation of probability estimates among neighboring nodes and prove the optimality of the solution. The proposed method can be interpreted as conducting a constrained embedding in a transformed space, or a ranking on the graph. Experimental results on different applications with heterogeneous data sources demonstrate the benefits of the proposed method over existing alternatives. 
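The propagation loop can be pictured with a toy C# sketch like the one below, which is a simplified reading of the update rather than the authors' exact algorithm: object estimates are averaged from the groups containing them, group estimates are averaged back from their members, and supervised groups are anchored to their initial labels by a penalty weight lambda. All values are invented.

```csharp
using System;
using System.Linq;

// Toy consensus propagation on the object-group bipartite graph. Groups 0
// and 1 come from supervised models (anchored to label vectors); group 2
// is an unsupervised cluster.
class ConsensusSketch
{
    const int C = 2; // number of classes

    static void Main()
    {
        int[][] groups = { new[] { 0, 1 }, new[] { 2, 3 }, new[] { 1, 2 } };
        double[][] anchors = { new[] { 1.0, 0.0 }, new[] { 0.0, 1.0 }, null };
        const double lambda = 2.0;   // how strongly supervised groups keep their labels
        const int numObjects = 4;

        double[][] q = groups.Select((g, i) => anchors[i] ?? new[] { 0.5, 0.5 }).ToArray();
        double[][] u = Enumerable.Range(0, numObjects)
                                 .Select(_ => new[] { 0.5, 0.5 }).ToArray();

        for (int it = 0; it < 100; it++)
        {
            // object step: average the estimates of the groups containing the object
            for (int o = 0; o < numObjects; o++)
            {
                var acc = new double[C];
                int cnt = 0;
                for (int g = 0; g < groups.Length; g++)
                    if (groups[g].Contains(o))
                    {
                        for (int c = 0; c < C; c++) acc[c] += q[g][c];
                        cnt++;
                    }
                for (int c = 0; c < C; c++) u[o][c] = acc[c] / cnt;
            }
            // group step: average members back, pulling supervised groups
            // toward their initial labeling (the penalty term of the objective)
            for (int g = 0; g < groups.Length; g++)
            {
                var acc = new double[C];
                foreach (int o in groups[g])
                    for (int c = 0; c < C; c++) acc[c] += u[o][c];
                double w = anchors[g] == null ? 0.0 : lambda;
                for (int c = 0; c < C; c++)
                    q[g][c] = (acc[c] + w * (anchors[g]?[c] ?? 0.0)) / (groups[g].Length + w);
            }
        }
        for (int o = 0; o < numObjects; o++)
            Console.WriteLine($"object {o}: P(c0)={u[o][0]:F2} P(c1)={u[o][1]:F2}");
    }
}
```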


A Highly Practical Approach toward Achieving Minimum Data Sets Storage Cost in the Cloud
Massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, where large application data sets can be stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud.
However, they are either insufficiently cost-effective for the storage or impractical to be used at runtime. In this paper, toward achieving the minimum cost benchmark, we propose a novel highly cost-effective and practical storage strategy that can automatically decide whether a generated data set should be stored or not at runtime in the cloud. 
The main focus of this strategy is the local-optimization for the tradeoff between computation and storage, while secondarily also taking users' (optional) preferences on storage into consideration. Both theoretical analysis and simulations conducted on general (random) data sets as well as specific real world applications with Amazon's cost model show that the cost-effectiveness of our strategy is close to or even the same as the minimum cost benchmark, and the efficiency is very high for practical runtime utilization in the cloud.
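At its core, the runtime decision weighs a data set's storage bill against the expected cost of regenerating it on demand. The C# sketch below shows that comparison with invented prices and usage rates; it leaves out the provenance dependencies between data sets that the full strategy also considers.

```csharp
using System;

// Back-of-the-envelope version of the store-vs-regenerate decision.
class StorageDecisionSketch
{
    // sizeGb * pricePerGbMonth  : monthly cost of keeping the data set stored
    // regenerationCost * usages : monthly cost of rebuilding it on each access
    static bool ShouldStore(double sizeGb, double pricePerGbMonth,
                            double regenerationCost, double usagesPerMonth)
    {
        double keepCost = sizeGb * pricePerGbMonth;
        double rebuildCost = regenerationCost * usagesPerMonth;
        return keepCost <= rebuildCost;
    }

    static void Main()
    {
        // 50 GB at $0.10/GB-month vs $1 to regenerate, accessed ~3 times a month
        Console.WriteLine(ShouldStore(50, 0.10, 1.0, 3.0)); // False: cheaper to regenerate
        // same data set but $5 to regenerate each time
        Console.WriteLine(ShouldStore(50, 0.10, 5.0, 3.0)); // True: cheaper to keep it stored
    }
}
```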


A Log-based Approach to Make Digital Forensics Easier on Cloud Computing
Abstract
Cloud computing has recently been receiving more and more attention from the information and communication technologies industry. Almost all of the leading companies in this area have shown strong interest in cloud computing and have released cloud services in succession. But for cloud computing to go further, more effort must be devoted to security issues.
In particular, the Internet environment has become increasingly insecure. With the popularization of computers and intelligent devices, the number of crimes involving them has grown rapidly over the last decades, and this growth will be even faster in the cloud computing environment.
No defense is impenetrable, so we should strengthen cloud computing not only through precaution, but also by dealing with security events in order to defend it against criminal activity. In this paper, we propose an approach that uses a log model to build a forensic-friendly system. Using this model, we can quickly gather information from cloud computing platforms for various forensic purposes, which reduces the complexity of such forensics.


A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay
We consider the problem of designing a joint congestion control and scheduling algorithm for multihop wireless networks. The goal is to maximize the total utility and achieve low end-to-end delay simultaneously. Assume that there are M flows inside the network, and each flow m has a fixed route with Hm hops. Further, the network operates under the one-hop interference constraint.
We develop a new congestion control and scheduling algorithm that combines a window-based flow control algorithm and a new distributed rate-based scheduling algorithm.
For any ϵ, ϵm ∈ (0, 1), by appropriately choosing the number of backoff mini-slots for the scheduling algorithm and the window size of flow m, our proposed algorithm can guarantee that each flow m achieves throughput no smaller than rm(1 - ϵ)(1 - ϵm), where the total utility of the rate allocation vector r⃗ = [rm] is no smaller than the total utility of any rate vector within half of the capacity region. Furthermore, the end-to-end delay of flow m can be upper bounded by Hm/(rm(1 - ϵ)ϵm).
Since a flow-m packet requires at least Hm time slots to reach the destination, the order of the per-flow delay upper bound is optimal with respect to the number of hops. To the best of our knowledge, this is the first fully distributed joint congestion-control and scheduling algorithm that can guarantee order-optimal per-flow end-to-end delay and utilize close-to-half of the system capacity under the one-hop interference constraint. The throughput and delay bounds are proved by a novel stochastic dominance approach, which could be of independent value and could be extended to general interference constraints. Our algorithm can be easily implemented in practice with a low per-node complexity that does not increase with the network size.
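A quick way to get a feel for the stated bounds is to plug numbers in. The C# snippet below evaluates the throughput lower bound rm(1 - ϵ)(1 - ϵm) and the delay upper bound Hm/(rm(1 - ϵ)ϵm) for one illustrative flow; all parameter values are invented.

```csharp
using System;

// Evaluating the paper's per-flow bounds for example parameters. Note the
// delay bound grows linearly in the hop count Hm, which is what makes it
// order-optimal.
class FlowBoundsSketch
{
    static void Main()
    {
        double rm = 0.2;            // rate allocated to flow m (packets/slot)
        double eps = 0.1, epsM = 0.2;
        int hm = 5;                 // hops on flow m's fixed route

        double throughputLb = rm * (1 - eps) * (1 - epsM);
        double delayUb = hm / (rm * (1 - eps) * epsM);

        Console.WriteLine($"throughput >= {throughputLb:F3} packets/slot"); // 0.144
        Console.WriteLine($"delay <= {delayUb:F1} slots");                  // ~138.9
    }
}
```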


A Neighbor Coverage-Based Probabilistic Rebroadcast for Reducing Routing Overhead in Mobile Ad Hoc Networks
Due to high mobility of nodes in mobile ad hoc networks (MANETs), there exist frequent link breakages which lead to frequent path failures and route discoveries. The overhead of a route discovery cannot be neglected. In a route discovery, broadcasting is a fundamental and effective data dissemination mechanism, where a mobile node blindly rebroadcasts the first received route request packets unless it has a route to the destination, and thus it causes the broadcast storm problem. 
In this paper, we propose a neighbor coverage-based probabilistic rebroadcast protocol for reducing routing overhead in MANETs. In order to effectively exploit the neighbor coverage knowledge, we propose a novel rebroadcast delay to determine the rebroadcast order, and then obtain a more accurate additional coverage ratio by sensing neighbor coverage knowledge.
We also define a connectivity factor to provide the node density adaptation. By combining the additional coverage ratio and connectivity factor, we set a reasonable rebroadcast probability. Our approach combines the advantages of the neighbor coverage knowledge and the probabilistic mechanism, which can significantly decrease the number of retransmissions so as to reduce the routing overhead, and can also improve the routing performance.
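The following C# sketch shows one plausible way the rebroadcast probability could be assembled from these two quantities; the formulas used here for the additional coverage ratio and the connectivity factor are simplified stand-ins, not the paper's exact definitions.

```csharp
using System;

// Combining coverage gain with a density-adaptive connectivity factor.
class RebroadcastSketch
{
    // uncovered:  neighbors not already covered by previously heard RREQs
    // total:      size of this node's neighbor set
    // avgDensity: observed average number of neighbors in the area
    static double RebroadcastProbability(int uncovered, int total, double avgDensity)
    {
        double additionalCoverage = (double)uncovered / total;
        // below a critical density, rebroadcast more eagerly to keep connectivity
        double connectivity = Math.Min(1.0, 5.1774 * Math.Log(total) / avgDensity);
        return Math.Min(1.0, additionalCoverage * connectivity);
    }

    static void Main()
    {
        Console.WriteLine(RebroadcastProbability(3, 10, 12.0)); // little to gain -> ~0.30
        Console.WriteLine(RebroadcastProbability(9, 10, 6.0));  // sparse, much to gain -> 0.90
    }
}
```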


A New Algorithm for Inferring User Search Goals with Feedback Sessions
 For a broad-topic and ambiguous query, different users may have different search goals when they submit it to a search engine. The inference and analysis of user search goals can be very useful in improving search engine relevance and user experience. In this paper, we propose a novel approach to infer user search goals by analyzing search engine query logs. 
First, we propose a framework to discover different user search goals for a query by clustering the proposed feedback sessions. Feedback sessions are constructed from user click-through logs and can efficiently reflect the information needs of users. Second, we propose a novel approach to generate pseudo-documents to better represent the feedback sessions for clustering. 
Finally, we propose a new criterion “Classified Average Precision (CAP)” to evaluate the performance of inferring user search goals. Experimental results are presented using user click-through logs from a commercial search engine to validate the effectiveness of our proposed methods.


A Privacy Leakage Upper Bound Constraint-Based Approach for Cost-Effective Privacy Preserving of Intermediate Data Sets in Cloud
Cloud computing provides massive computation power and storage capacity which enable users to deploy computation and data-intensive applications without infrastructure investment. Along the processing of such applications, a large volume of intermediate data sets will be generated, and often stored to save the cost of recomputing them. 
However, preserving the privacy of intermediate data sets becomes a challenging problem because adversaries may recover privacy-sensitive information by analyzing multiple intermediate data sets. Encrypting all data sets in the cloud is widely adopted in existing approaches to address this challenge. But we argue that encrypting all intermediate data sets is neither efficient nor cost-effective, because it is very time consuming and costly for data-intensive applications to encrypt/decrypt data sets frequently while performing any operation on them.
In this paper, we propose a novel upper bound privacy leakage constraint-based approach to identify which intermediate data sets need to be encrypted and which do not, so that privacy-preserving cost can be saved while the privacy requirements of data holders can still be satisfied. Evaluation results demonstrate that the privacy-preserving cost of intermediate data sets can be significantly reduced with our approach over existing ones where all data sets are encrypted.
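A greedy toy version of the idea in C#: encrypt the intermediate data sets that buy the most leakage reduction per unit cost until the leakage of the ones left in plaintext falls under the holder's bound. The leakage scores, their additive combination, and the greedy order are simplifying assumptions, not the paper's actual algorithm.

```csharp
using System;
using System.Linq;

// Selective encryption under a privacy leakage upper bound.
class SelectiveEncryptionSketch
{
    record DataSet(string Name, double Leakage, double EncryptionCost);

    static void Main()
    {
        var sets = new[]
        {
            new DataSet("d1", 0.30, 5.0),
            new DataSet("d2", 0.25, 1.0),
            new DataSet("d3", 0.20, 4.0),
            new DataSet("d4", 0.10, 2.0),
        };
        double bound = 0.35;                       // data holder's privacy requirement
        double plaintextLeakage = sets.Sum(s => s.Leakage);

        // encrypt the sets that buy the most leakage reduction per unit cost
        foreach (var s in sets.OrderByDescending(s => s.Leakage / s.EncryptionCost))
        {
            if (plaintextLeakage <= bound) break;  // requirement already met
            plaintextLeakage -= s.Leakage;
            Console.WriteLine($"encrypt {s.Name} (cost {s.EncryptionCost})");
        }
        Console.WriteLine($"leakage of remaining plaintext sets: {plaintextLeakage:F2}");
    }
}
```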


A Probabilistic Misbehavior Detection Scheme towards Efficient Trust Establishment in Delay-tolerant Networks
Abstract
Malicious and selfish behaviors represent a serious threat against routing in Delay/Disruption Tolerant Networks (DTNs). Due to the unique network characteristics, designing a misbehavior detection scheme in DTN is regarded as a great challenge. 
In this paper, we propose iTrust, a probabilistic misbehavior detection scheme, for secure DTN routing towards efficient trust establishment. The basic idea of iTrust is to introduce a periodically available Trusted Authority (TA) that judges a node's behavior based on collected routing evidence and probabilistic checking.
We model iTrust as the Inspection Game and use game theoretical analysis to demonstrate that, by setting an appropriate investigation probability, TA could ensure the security of DTN routing at a reduced cost. To further improve the efficiency of the proposed scheme, we correlate detection probability with a node’s reputation, which allows a dynamic detection probability determined by the trust of the users. 
Extensive analysis and simulation results substantiate the effectiveness and efficiency of the proposed scheme.


A Proxy-Based Approach to Continuous Location-Based Spatial Queries in Mobile Environments
Caching valid regions of spatial queries at mobile clients is effective in reducing the number of queries submitted by mobile clients and query load on the server. However, mobile clients suffer from longer waiting time for the server to compute valid regions. We propose in this paper a proxy-based approach to continuous nearest-neighbor (NN) and window queries. 
The proxy creates estimated valid regions (EVRs) for mobile clients by exploiting spatial and temporal locality of spatial queries. For NN queries, we devise two new algorithms to accelerate EVR growth, leading the proxy to build effective EVRs even when the cache size is small. On the other hand, we propose to represent the EVRs of window queries in the form of vectors, called estimated window vectors (EWVs), to achieve larger estimated valid regions. 
This novel representation and the associated creation algorithm result in more effective EVRs of window queries. In addition, due to their distinct characteristics, we use separate index structures, namely EVR-tree and grid index, for NN queries and window queries, respectively.
To further increase efficiency, we develop algorithms to exploit the results of NN queries to aid grid index growth, benefiting EWV creation of window queries. Similarly, the grid index is utilized to support NN query answering and EVR updating. We conduct several experiments for performance evaluation. The experimental results show that the proposed approach significantly outperforms the existing proxy-based approaches.


A Rough-Set-Based Incremental Approach for Updating Approximations under Dynamic Maintenance Environments
Approximations of a concept by a variable precision rough-set model (VPRS) usually vary under a dynamic information system environment. It is thus effective to carry out incremental updating of approximations by utilizing previously computed data structures.
This paper focuses on a new incremental method for updating approximations of VPRS as objects in the information system dynamically alter. It discusses properties of information granulation and approximations under the dynamic environment while objects in the universe evolve over time.
The variation of an attribute's domain is also considered to perform incremental updating for approximations under VPRS. Finally, an extensive experimental evaluation validates the efficiency of the proposed method for dynamic maintenance of VPRS approximations.


A Scalable Server Architecture for Mobile Presence Services in Social Network Applications
Social network applications are becoming increasingly popular on mobile devices. A mobile presence service is an essential component of a social network application because it maintains each mobile user's presence information, such as the current status (online/offline), GPS location and network address, and also updates the user's online friends with the information continually. If presence updates occur frequently, the enormous number of messages distributed by presence servers may lead to a scalability problem in a large-scale mobile presence service. 
To address the problem, we propose an efficient and scalable server architecture, called PresenceCloud, which enables mobile presence services to support large-scale social network applications. When a mobile user joins a network, PresenceCloud searches for the presence of his/her friends and notifies them of his/her arrival. PresenceCloud organizes presence servers into a quorum-based server-to-server architecture for efficient presence searching. 
It also leverages a directed search algorithm and a one-hop caching strategy to achieve small constant search latency. We analyze the performance of PresenceCloud in terms of the search cost and search satisfaction level. The search cost is defined as the total number of messages generated by the presence server when a user arrives; the search satisfaction level is defined as the time it takes to search for the arriving user's friend list. The results of simulations demonstrate that PresenceCloud achieves performance gains in the search cost without compromising search satisfaction.
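The quorum intuition can be sketched with a toy grid of presence servers in C#: presence is replicated along the hosting server's row, and a search probes a single column, which always intersects every row, so a buddy is found in a constant number of probes per server. Grid size and replication details are simplified assumptions for the sketch.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Grid-quorum flavor of a server-to-server presence architecture.
class QuorumGridSketch
{
    const int Side = 3;   // 9 presence servers arranged in a 3x3 grid
    static readonly List<string>[] Replicas =
        Enumerable.Range(0, Side * Side).Select(_ => new List<string>()).ToArray();

    static void Publish(int hostServer, string user)
    {
        int row = hostServer / Side;
        for (int col = 0; col < Side; col++)      // replicate across the whole row
            Replicas[row * Side + col].Add(user);
    }

    static bool Search(int askingServer, string user)
    {
        int col = askingServer % Side;
        for (int row = 0; row < Side; row++)      // probe a single column: O(sqrt(n))
            if (Replicas[row * Side + col].Contains(user)) return true;
        return false;
    }

    static void Main()
    {
        Publish(hostServer: 4, user: "alice");                     // alice attaches to server 4
        Console.WriteLine(Search(askingServer: 8, user: "alice")); // True, via server 5
        Console.WriteLine(Search(askingServer: 0, user: "bob"));   // False
    }
}
```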


A Secure Payment Scheme with Low Communication and Processing Overhead for Multihop Wireless Networks


A System to Filter Unwanted Messages from OSN User Walls
One fundamental issue in today's Online Social Networks (OSNs) is to give users the ability to control the messages posted on their own private space so that unwanted content is not displayed. Up to now, OSNs provide little support for this requirement.
To fill the gap, in this paper we propose a system allowing OSN users to have direct control over the messages posted on their walls. This is achieved through a flexible rule-based system that allows users to customize the filtering criteria applied to their walls, and a Machine Learning-based soft classifier that automatically labels messages in support of content-based filtering.


Adaptive Network Coding for Broadband Wireless Access Networks
Broadband wireless access (BWA) networks, such as LTE and WiMAX, are inherently lossy due to wireless medium unreliability. Although the Hybrid Automatic Repeat reQuest (HARQ) error-control method recovers from packet loss, it has low transmission efficiency and is unsuitable for delay-sensitive applications.
Alternatively, network coding techniques improve the throughput of wireless networks, but incur significant overhead and ignore network constraints such as Medium Access Control (MAC) layer transmission opportunities and physical (PHY) layer channel conditions. The present study provides analysis of Random Network Coding (RNC) and Systematic Network Coding (SNC) decoding probabilities. 
Based on the analytical results, SNC is selected for developing an adaptive network coding scheme designated as Frame-by-frame Adaptive Systematic Network Coding (FASNC). According to network constraints per frame, FASNC dynamically utilizes either Modified Systematic Network Coding (M-SNC) or Mixed Generation Coding (MGC). An analytical model is developed for evaluating the mean decoding delay and mean goodput of the proposed FASNC scheme. 
The results derived using this model agree with those obtained from computer simulations. Simulations show that FASNC results in both lower decoding delay and reduced buffer requirements compared to MRNC and N-in-1 ReTX, while also yielding higher goodput than HARQ, MRNC, and N-in-1 ReTX.


Adaptive Position Update for Geographic Routing in Mobile Ad Hoc Networks
In geographic routing, nodes need to maintain up-to-date positions of their immediate neighbors for making effective forwarding decisions. Periodic broadcasting of beacon packets that contain the geographic location coordinates of the nodes is a popular method used by most geographic routing protocols to maintain neighbor positions. 
We contend and demonstrate that periodic beaconing regardless of the node mobility and traffic patterns in the network is not attractive from both update cost and routing performance points of view. 
We propose the Adaptive Position Update (APU) strategy for geographic routing, which dynamically adjusts the frequency of position updates based on the mobility dynamics of the nodes and the forwarding patterns in the network. APU is based on two simple principles: (i) nodes whose movements are harder to predict update their positions more frequently (and vice versa), and (ii) nodes closer to forwarding paths update their positions more frequently (and vice versa).
Our theoretical analysis, which is validated by NS2 simulations of a well-known geographic routing protocol, Greedy Perimeter Stateless Routing Protocol (GPSR), shows that APU can significantly reduce the update cost and improve the routing performance in terms of packet delivery ratio and average end-to-end delay in comparison with periodic beaconing and other recently proposed updating schemes. 
The benefits of APU are further confirmed by undertaking evaluations in realistic network scenarios, which account for localization error, realistic radio propagation, and sparse networks.
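The mobility-prediction side of APU can be pictured with a small C# sketch: a node beacons again only when its true position drifts beyond a threshold from the straight-line prediction implied by its last beacon. The threshold and the motion model here are illustrative assumptions.

```csharp
using System;

// Beacon only when neighbors' linear prediction of our position has
// drifted too far from where we actually are.
class ApuSketch
{
    const double Threshold = 10.0;   // meters of tolerated prediction error

    static void Main()
    {
        // last beacon advertised position (0,0) and velocity (1.0, 0.5) m/s
        double bx = 0, by = 0, vx = 1.0, vy = 0.5;

        for (double t = 1; t <= 30; t++)
        {
            // true motion deviates from the advertised straight line
            double x = 1.3 * t, y = 0.1 * t;
            double px = bx + vx * t, py = by + vy * t;   // neighbors' prediction
            double error = Math.Sqrt((x - px) * (x - px) + (y - py) * (y - py));
            if (error > Threshold)
            {
                Console.WriteLine($"t={t}s: error {error:F1} m -> send beacon");
                break;   // a real node would re-anchor bx, by, vx, vy here
            }
        }
    }
}
```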


AML: Efficient Approximate Membership Localization within a Web-Based Join Framework
In this paper, we propose a new type of dictionary-based entity recognition problem, named Approximate Membership Localization (AML). The popular Approximate Membership Extraction (AME) provides full coverage of the true matched substrings in a given document, but its many redundancies lower the efficiency of the AME process and deteriorate the performance of real-world applications that use the extracted substrings.
The AML problem targets locating non-overlapped substrings, which are a better approximation of the true matched substrings, without generating overlapped redundancies. In order to perform AML efficiently, we propose the optimized algorithm P-Prune, which prunes a large part of overlapped redundant matched substrings before generating them. Our study using several real-world data sets demonstrates the efficiency of P-Prune over a baseline method.
We also study AML in application to a proposed web-based join framework scenario, a search-based approach joining two tables using dictionary-based entity recognition from web documents. The results not only prove the advantage of AML over AME, but also demonstrate the effectiveness of our search-based approach.


An Efficient and Robust Addressing Protocol for Node Autoconfiguration in Ad Hoc Networks
Address assignment is a key challenge in ad hoc networks due to the lack of infrastructure. Autonomous addressing protocols require a distributed and self-managed mechanism to avoid address collisions in a dynamic network with fading channels, frequent partitions, and joining/leaving nodes. 
We propose and analyze a lightweight protocol that configures mobile ad hoc nodes based on a distributed address database stored in filters, which reduces the control load and makes the proposal robust to packet losses and network partitions.
We evaluate the performance of our protocol, considering joining nodes, partition merging events, and network initialization. Simulation results show that our protocol resolves all the address collisions and also reduces the control traffic when compared to previously proposed protocols.
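A minimal C# sketch of the kind of filter-based address database such a protocol could store at every node, using a small Bloom filter: membership tests tell a joining node whether a candidate address is probably taken, at the price of rare false positives. The filter size and hash mixes are illustrative, not the paper's parameters.

```csharp
using System;
using System.Collections;

// Compact, lossy set of allocated addresses shared among nodes.
class AddressFilterSketch
{
    const int Bits = 1024;
    readonly BitArray bits = new BitArray(Bits);

    static int[] Hashes(int address)
    {
        uint a = (uint)address;
        uint h1 = (a * 2654435761u) % (uint)Bits;   // Knuth multiplicative hash
        uint h2 = (a * 40503u + 17u) % (uint)Bits;
        return new[] { (int)h1, (int)h2, (int)((h1 + h2) % (uint)Bits) };
    }

    public void Allocate(int address)
    {
        foreach (int h in Hashes(address)) bits[h] = true;
    }

    public bool ProbablyAllocated(int address)
    {
        foreach (int h in Hashes(address))
            if (!bits[h]) return false;             // definitely free
        return true;                                // taken, or a rare false positive
    }

    static void Main()
    {
        var filter = new AddressFilterSketch();
        filter.Allocate(0x0A000001);                              // node claims 10.0.0.1
        Console.WriteLine(filter.ProbablyAllocated(0x0A000001));  // True
        Console.WriteLine(filter.ProbablyAllocated(0x0A000002));  // almost certainly False
    }
}
```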


Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. 
One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. 
In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has a running time that is unrelated to the system size N.
We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis.


Attribute-Aware Data Aggregation Using Potential-Based Dynamic Routing in Wireless Sensor Networks
Resources, especially energy, in wireless sensor networks (WSNs) are quite limited. Since sensor nodes are usually densely deployed, the data they sample contain much redundancy; data aggregation thus becomes an effective method to eliminate redundancy, minimize the number of transmissions, and save energy.
Since many applications can be deployed in WSNs and various sensors are embedded in nodes, the packets generated by heterogeneous sensors or different applications have different attributes, and packets from different applications cannot be aggregated together. Moreover, most data aggregation schemes employ static routing protocols, which cannot dynamically or intentionally forward packets according to network state or packet types.
The spatial isolation caused by static routing protocols is unfavorable to data aggregation. To make data aggregation more efficient, in this paper we introduce the concept of packet attribute, defined as the identifier of the data sampled by different kinds of sensors or applications, and then propose an attribute-aware data aggregation (ADA) scheme consisting of a packet-driven timing algorithm and a special dynamic routing protocol.
Inspired by the concept of potential in physics and pheromone in ant colonies, a potential-based dynamic routing protocol is elaborated to support the ADA strategy. The performance evaluation results in a series of scenarios verify that the ADA scheme can make packets with the same attribute spatially convergent as much as possible and therefore improve the efficiency of data aggregation. Furthermore, the ADA scheme also offers other properties, such as scalability with respect to network size and adaptability for tracking mobile events.


Attribute-based Access to Scalable Media in Cloud-assisted Content Sharing Networks
This paper presents a novel Multi-message Ciphertext Policy Attribute-Based Encryption (MCP-ABE) technique, and employs the MCP-ABE to design an access control scheme for sharing scalable media based on data consumers' attributes (e.g., age, nationality, or gender) rather than an explicit list of the consumers' names. 
The scheme is efficient and flexible because MCP-ABE allows a content provider to specify an access policy and encrypt multiple messages within one ciphertext such that only the users whose attributes satisfy the access policy can decrypt the ciphertext. 
Moreover, the paper shows how to support resource-limited mobile devices by offloading computationally intensive operations to cloud servers without compromising data privacy.


CAM: Cloud-Assisted Privacy Preserving Mobile Health Monitoring
Cloud-assisted mobile health (mHealth) monitoring, which applies the prevailing mobile communications and cloud computing technologies to provide feedback decision support, has been considered as a revolutionary approach to improving the quality of healthcare service while lowering the healthcare cost. 
Unfortunately, it also poses a serious risk to both clients' privacy and the intellectual property of monitoring service providers, which could deter the wide adoption of mHealth technology. This paper addresses this important problem by designing a cloud-assisted privacy-preserving mobile health monitoring system to protect the privacy of the involved parties and their data.
Moreover, the outsourcing decryption technique and a newly proposed key private proxy re-encryption are adapted to shift the computational complexity of the involved parties to the cloud without compromising clients' privacy and service providers' intellectual property. Finally, our security and performance analysis demonstrates the effectiveness of our proposed design.


Cloud Computing for Mobile Users: Can Offloading Computation Save Energy?
The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?


Cooperative Packet Delivery in Hybrid Wireless Mobile Networks: A Coalitional Game Approach
We study the problem of cooperative packet delivery to mobile nodes in a hybrid wireless mobile network, where both infrastructure-based and infrastructure-less (i.e., ad hoc mode or peer-to-peer mode) communications are used.
We propose a solution based on a coalition formation among mobile nodes to cooperatively deliver packets among these mobile nodes in the same coalition. A coalitional game is developed to analyze the behavior of the rational mobile nodes for cooperative packet delivery. A group of mobile nodes makes a decision to join or to leave a coalition based on their individual payoffs. 
The individual payoff of each mobile node is a function of the average delivery delay for packets transmitted to the mobile node from a base station and the cost incurred by this mobile node for relaying packets to other mobile nodes. To find the payoff of each mobile node, a Markov chain model is formulated and the expected cost and packet delivery delay are obtained when the mobile node is in a coalition. 
Since both the expected cost and packet delivery delay depend on the probability that each mobile node will help other mobile nodes in the same coalition to forward packets to the destination mobile node in the same coalition, a bargaining game is used to find the optimal helping probabilities. 
After the payoff of each mobile node is obtained, we find the solutions of the coalitional game which are the stable coalitions. A distributed algorithm is presented to obtain the stable coalitions and a Markov-chain-based analysis is used to evaluate the stable coalitional structures obtained from the distributed algorithm. 
Performance evaluation results show that when the stable coalitions are formed, the mobile nodes achieve a nonzero payoff (i.e., utility is higher than the cost). With a coalition formation, the mobile nodes achieve higher payoff than that when each mobile node acts alone.


Cross-domain privacy-preserving cooperative firewall optimization
Firewalls have been widely deployed on the Internet for securing private networks. A firewall checks each incoming or outgoing packet to decide whether to accept or discard the packet based on its policy. Optimizing firewall policies is crucial for improving network performance. Prior work on firewall optimization focuses on either intrafirewall or interfirewall optimization within one administrative domain where the privacy of firewall policies is not a concern. 
This paper explores interfirewall optimization across administrative domains for the first time. The key technical challenge is that firewall policies cannot be shared across domains because a firewall policy contains confidential information and even potential security holes, which can be exploited by attackers. 
In this paper, we propose the first cross-domain privacy-preserving cooperative firewall policy optimization protocol. Specifically, for any two adjacent firewalls belonging to two different administrative domains, our protocol can identify in each firewall the rules that can be removed because of the other firewall. 
The optimization process involves cooperative computation between the two firewalls without any party disclosing its policy to the other. We implemented our protocol and conducted extensive experiments. The results on real firewall policies show that our protocol can remove as many as 49% of the rules in a firewall, whereas the average is 19.4%. The communication cost is less than a few hundred kilobytes. Our protocol incurs no extra online packet processing overhead, and the offline processing time is less than a few hundred seconds.


Cross-Layer Design of Congestion Control and Power Control in Fast-Fading Wireless Networks
We study the cross-layer design of congestion control and power allocation with an outage constraint in interference-limited multihop wireless networks. Using a complete-convexification method, we first propose a message-passing distributed algorithm that can attain the globally optimal source rate and link power allocation.
Despite the attractiveness of its optimality, this algorithm requires larger message size than that of the conventional scheme, which increases network overheads. Using the bounds on outage probability, we map the outage constraint to an SIR constraint and continue developing a practical near-optimal distributed algorithm requiring only local SIR measurement at link receivers to limit the size of the message. 
Due to the complicated complete-convexification method, however, the congestion control of both algorithms no longer preserves the existing TCP stack. To retain the TCP stack, we propose a third algorithm using a successive convex approximation method to iteratively transform the original nonconvex problem into approximated convex problems; the globally optimal solution can then be reached distributively with message passing.
Thanks to the tightness of the bounds and the successive approximations, numerical results show that the gap among the three algorithms is almost indistinguishable. Despite using the same complete-convexification method, the numerical comparison shows that the second, near-optimal scheme has a faster convergence rate than the first, optimal one, which makes the near-optimal scheme more favorable and applicable in practice. Meanwhile, the third scheme also has a faster convergence rate than that of a previous work using a logarithmic successive approximation method.


Cross-Layer Metrics for Reliable Routing in Wireless Mesh Networks
Wireless mesh networks (WMNs) have emerged as a flexible and low-cost network infrastructure, where heterogeneous mesh routers managed by different users collaborate to extend network coverage. 
This paper proposes a novel routing metric, Expected Forwarded Counter (EFW), and two further variants, to cope with the problem of selfish behavior (i.e., packet dropping) of mesh routers in a WMN. EFW combines, in a cross-layer fashion, routing-layer observations of forwarding behavior with MAC-layer measurements of wireless link quality to select the most reliable and high-performance path. 
We evaluate the proposed metrics both through simulations and real-life deployments on two different wireless testbeds, performing a comparative analysis with On-Demand Secure Byzantine Resilient Routing (ODSBR) Protocol and Expected Transmission Counter (ETX). 
The results show that our cross-layer metrics accurately capture the path reliability and considerably increase the WMN performance, even when a high percentage of network nodes misbehave.
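The shape of the EFW idea can be shown in a few lines of C#: take ETX's expected transmission count for the link, 1/(df·dr), and inflate it by the next hop's observed forwarding rate so that selfish routers look expensive. The exact combination below is a simplified stand-in for the paper's definitions.

```csharp
using System;

// Link cost that blends MAC-layer quality with routing-layer behavior.
class EfwSketch
{
    // df, dr: forward/reverse delivery ratios measured by probes (ETX inputs)
    // forwardingRate: fraction of packets the next hop actually forwarded
    static double Efw(double df, double dr, double forwardingRate)
    {
        double etx = 1.0 / (df * dr);   // expected transmissions on the link
        return etx / forwardingRate;    // expected transmissions per *forwarded* packet
    }

    static void Main()
    {
        Console.WriteLine(Efw(0.9, 0.9, 1.0).ToString("F2"));  // honest node: ~1.23
        Console.WriteLine(Efw(0.9, 0.9, 0.5).ToString("F2"));  // drops half: ~2.47
    }
}
```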


Detection and Localization of Multiple Spoofing Attackers in Wireless Networks
Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. 
In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for 1) detecting spoofing attacks; 2) determining the number of attackers when multiple adversaries masquerade as the same node identity; and 3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to detect the spoofing attacks.
We then formulate the problem of determining the number of attackers as a multiclass detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When the training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. 
In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. Our experimental results show that our proposed methods can achieve over 90 percent Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.


Distributed Cooperative Caching in Social Wireless Networks 
This paper introduces cooperative caching policies for minimizing electronic content provisioning cost in Social Wireless Networks (SWNETs). SWNETs are formed by mobile devices, such as data-enabled phones and electronic book readers, sharing common interests in electronic content and physically gathering together in public places.
Electronic object caching in such SWNETs is shown to be able to reduce the content provisioning cost, which depends heavily on the service and pricing dependencies among various stakeholders including content providers (CP), network service providers, and End Consumers (EC).
Drawing motivation from Amazon's Kindle electronic book delivery business, this paper develops practical network, service, and pricing models which are then used for creating two object caching strategies for minimizing content provisioning costs in networks with homogeneous and heterogeneous object demands.
The paper constructs analytical and simulation models for analyzing the proposed caching strategies in the presence of selfish users that deviate from network-wide cost-optimal policies. It also reports results from an Android phone-based prototype SWNET, validating the presented analytical and simulation results.


Distributed Processing of Probabilistic Top-k Queries in Wireless Sensor Networks
In this paper, we introduce the notion of sufficient set and necessary set for distributed processing of probabilistic top-k queries in cluster-based wireless sensor networks. These two concepts have very nice properties that can facilitate localized data pruning in clusters. 
Accordingly, we develop a suite of algorithms, namely, sufficient set-based (SSB), necessary set-based (NSB), and boundary-based (BB), for intercluster query processing with bounded rounds of communications. Moreover, in responding to dynamic changes of data distribution in the network, we develop an adaptive algorithm that dynamically switches among the three proposed algorithms to minimize the transmission cost. 
We show the applicability of sufficient set and necessary set to wireless sensor networks with both two-tier hierarchical and tree-structured network topologies. Experimental results show that the proposed algorithms reduce data transmissions significantly and incur only small constant rounds of data communications. 
The experimental results also demonstrate the superiority of the adaptive algorithm, which achieves a near-optimal performance under various conditions.


Dynamic Audit Services for Outsourced Storages in Clouds
In this paper, we propose a dynamic audit service for verifying the integrity of untrusted and outsourced storage. Our audit service is constructed based on the techniques of fragment structure, random sampling, and index-hash tables, supporting provable updates to outsourced data and timely anomaly detection.
In addition, we propose a method based on probabilistic query and periodic verification for improving the performance of audit services. Our experimental results not only validate the effectiveness of our approaches, but also show that our audit system verifies integrity with lower computation overhead while requiring less extra storage for audit metadata.
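The random-sampling side of such a service is easy to picture in C#: the auditor keeps an index-hash table for the outsourced blocks and periodically challenges a random subset rather than re-reading the whole store. The real scheme relies on the fragment structure and verification tags rather than the plain SHA-256 hashes of this toy.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Probabilistic integrity check over a random sample of blocks.
class AuditSketch
{
    static byte[] Hash(string s) => SHA256.HashData(Encoding.UTF8.GetBytes(s));

    static void Main()
    {
        string[] blocks = Enumerable.Range(0, 1000).Select(i => $"block-{i}").ToArray();
        byte[][] indexHashTable = blocks.Select(Hash).ToArray();   // auditor's state

        blocks[123] = "tampered";                                  // storage misbehaves

        // one challenge over 50 random blocks: a single corrupted block out
        // of 1000 is caught with probability ~1 - (999/1000)^50 per challenge;
        // repeating challenges periodically drives the miss rate toward zero.
        var rng = new Random();
        bool ok = Enumerable.Range(0, 50)
            .Select(_ => rng.Next(blocks.Length))
            .All(i => Hash(blocks[i]).SequenceEqual(indexHashTable[i]));
        Console.WriteLine(ok ? "challenge passed" : "integrity violation detected");
    }
}
```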


Dynamic Coverage of Mobile Sensor Networks
We study the dynamic aspects of the coverage of a mobile sensor network resulting from continuous movement of sensors. As sensors move around, initially uncovered locations may be covered at a later time, and intruders that might never be detected in a stationary sensor network can now be detected by moving sensors. 
However, this improvement in coverage is achieved at the cost that a location is covered only part of the time, alternating between covered and not covered. We characterize area coverage at specific time instants and during time intervals, as well as the time durations that a location is covered and uncovered. 
We further consider the time it takes to detect a randomly located intruder and prove that the detection time is exponentially distributed with parameter 2λrv̄s, where λ represents the sensor density, r represents the sensor's sensing range, and v̄s denotes the average sensor speed.
For mobile intruders, we take a game theoretic approach and derive optimal mobility strategies for both sensors and intruders. We prove that the optimal sensor strategy is to choose their directions uniformly at random in (0, 2π), and the optimal intruder strategy is to remain stationary. This solution represents a mixed strategy which is a Nash equilibrium of the zero-sum game between mobile sensors and intruders.


Dynamic Personalized Recommendation on Sparse Data
Recommendation techniques are very important in the fields of E-commerce and other Web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. 
In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents is utilized by exploring latent relations between ratings; a set of dynamic features is designed to describe user preferences in multiple phases; and finally, a recommendation is made by adaptively weighting the features. Experimental results on public datasets show that the proposed algorithm has satisfying performance.


Dynamic Query Forms for Database Queries
Modern scientific and web databases maintain large and heterogeneous data. These real-world database schemas contain hundreds or even thousands of attributes and relations. Traditional predefined query forms are not able to satisfy various ad-hoc queries from users.
This paper proposes DQF, a novel database query form interface, which is able to dynamically generate query forms. The essence of DQF is to capture the user’s preference and rank query form components. The generation of the query form is an iterative process and is guided by the user. At each iteration, the system automatically generates ranking lists of form components and the user then adds the desired form components into the query form. 
The ranking of form components is based on the captured user preference. The user can also fill in the query form and submit queries to view the query result at each iteration. In this way, the query form can be dynamically refined until the user is satisfied with the query results.
We propose a metric for measuring the goodness of a query form. A probabilistic model is developed for estimating the goodness of a query form in DQF. Our experimental evaluation and user study demonstrate the effectiveness and efficiency of the system.


Dynamic Resource Allocation using Virtual Machines for Cloud Computing Environment
Cloud computing allows business customers to scale up and down their resource usage based on needs. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. 
In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and support green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multidimensional resource utilization of a server.
By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.
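The skewness metric itself fits in one function. The C# sketch below implements one natural reading (square root of the summed squared deviations of each resource's utilization from the server's mean utilization); the paper's exact normalization may differ.

```csharp
using System;
using System.Linq;

// How unevenly a server's different resources are utilized: 0 when all
// resources are equally loaded, larger when one resource dominates.
class SkewnessSketch
{
    // utilizations: e.g., { cpu, memory, network I/O } in [0, 1]
    static double Skewness(double[] utilizations)
    {
        double mean = utilizations.Average();
        return Math.Sqrt(utilizations.Sum(u => Math.Pow(u / mean - 1, 2)));
    }

    static void Main()
    {
        Console.WriteLine(Skewness(new[] { 0.6, 0.6, 0.6 }).ToString("F3")); // 0.000: balanced
        Console.WriteLine(Skewness(new[] { 0.9, 0.2, 0.1 }).ToString("F3")); // ~1.541: CPU-heavy
    }
}
```

Placing a new VM so that the resulting skewness stays low is what lets the system pack complementary workloads (CPU-heavy with memory-heavy) onto the same server.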


Efficient Algorithms for Neighbor Discovery in Wireless Networks
Neighbor discovery is an important first step in the initialization of a wireless ad hoc network. In this paper, we design and analyze several algorithms for neighbor discovery in wireless networks. Starting with a single-hop wireless network of n nodes, we propose a Θ(n ln n) ALOHA-like neighbor discovery algorithm when nodes cannot detect collisions, and an order-optimal Θ(n) receiver feedback-based algorithm when nodes can detect collisions.
Our algorithms neither require nodes to have a priori estimates of the number of neighbors nor synchronization between nodes. Our algorithms allow nodes to begin execution at different time instants and to terminate neighbor discovery upon discovering all their neighbors. 
We finally show that receiver feedback can be used to achieve a Θ(n) running time, even when nodes cannot detect collisions. We then analyze neighbor discovery in a general multihop setting. We establish an upper bound of O(Δ ln n) on the running time of the ALOHA-like algorithm, where Δ denotes the maximum node degree in the network and n the total number of nodes.
We also establish a lower bound of Ω(Δ + ln n) on the running time of any randomized neighbor discovery algorithm. Our result thus implies that the ALOHA-like algorithm is at most a factor min(Δ, ln n) worse than optimal.
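The Θ(n ln n) behavior of the ALOHA-like algorithm is easy to see in a toy C# simulation of a single-hop clique without collision detection: each node transmits with probability 1/n per slot, and a slot is useful only when exactly one node transmits (the constant factor here is larger than n ln n, as expected).

```csharp
using System;

// Coupon-collector-style simulation of ALOHA-like neighbor discovery.
class NeighborDiscoverySketch
{
    static void Main()
    {
        var rng = new Random(7);
        int n = 50;
        bool[] discovered = new bool[n];
        int remaining = n, slots = 0;

        while (remaining > 0)
        {
            slots++;
            int transmitter = -1, count = 0;
            for (int i = 0; i < n; i++)
                if (rng.NextDouble() < 1.0 / n) { transmitter = i; count++; }
            if (count == 1 && !discovered[transmitter])   // collision-free, new node
            {
                discovered[transmitter] = true;
                remaining--;
            }
        }
        Console.WriteLine($"all {n} neighbors discovered after {slots} slots");
    }
}
```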


Efficient and Effective Duplicate Detection in Hierarchical Data
Although there is a long line of work on identifying duplicates in relational data, only a few solutions focus on duplicate detection in more complex hierarchical structures, like XML data. 
In this paper, we present a novel method for XML duplicate detection, called XMLDup. XMLDup uses a Bayesian network to determine the probability of two XML elements being duplicates, considering not only the information within the elements, but also the way that information is structured. In addition, to improve the efficiency of the network evaluation, a novel pruning strategy, capable of significant gains over the unoptimized version of the algorithm, is presented. 
Through experiments, we show that our algorithm is able to achieve high precision and recall scores on several data sets. XMLDup is also able to outperform another state-of-the-art duplicate detection solution, in terms of both efficiency and effectiveness.


EMAP: Expedite Message Authentication Protocol for Vehicular Ad Hoc Networks
Vehicular ad hoc networks (VANETs) adopt the Public Key Infrastructure (PKI) and Certificate Revocation Lists (CRLs) for their security. In any PKI system, the authentication of a received message is performed by checking if the certificate of the sender is included in the current CRL, and verifying the authenticity of the certificate and signature of the sender. 
In this paper, we propose an Expedite Message Authentication Protocol (EMAP) for VANETs, which replaces the time-consuming CRL checking process with an efficient revocation checking process. The revocation check process in EMAP uses a keyed Hash Message Authentication Code (HMAC), where the key used in calculating the HMAC is shared only between nonrevoked On-Board Units (OBUs). In addition, EMAP uses a novel probabilistic key distribution, which enables nonrevoked OBUs to securely share and update a secret key.
EMAP can significantly decrease the message loss ratio due to the message verification delay compared with the conventional authentication methods employing CRL. By conducting security analysis and performance evaluation, EMAP is demonstrated to be secure and efficient.
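The flavor of the revocation check can be sketched in C# with the standard HMAC primitive: a receiver accepts a message only if its HMAC verifies under the secret key shared among nonrevoked OBUs, so a revoked sender, lacking the current key, fails the check. Key distribution and message layout are heavily simplified here.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// CRL-free revocation check: possession of the current group key is
// demonstrated by a valid HMAC over the message.
class RevocationCheckSketch
{
    static byte[] Tag(byte[] key, string message)
    {
        using var hmac = new HMACSHA256(key);
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(message));
    }

    static void Main()
    {
        byte[] groupKey = RandomNumberGenerator.GetBytes(32); // shared by nonrevoked OBUs
        string msg = "beacon: position=(12.5, 7.3), speed=14 m/s";

        byte[] senderTag = Tag(groupKey, msg);                // attached by the sender
        bool accepted = CryptographicOperations.FixedTimeEquals(
            Tag(groupKey, msg), senderTag);
        Console.WriteLine(accepted ? "sender holds the current key: accept"
                                   : "revocation check failed: drop");
    }
}
```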


Enabling Dynamic Data and Indirect Mutual Trust for Cloud Computing Storage Systems
Storage-as-a-Service (SaaS) offered by cloud service providers (CSPs) is a paid facility that enables organizations to outsource their sensitive data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end. 
A data owner pays for a desired level of security and must get some compensation in case of any misbehavior committed by the CSP. On the other hand, the CSP needs a protection from any false accusation that may be claimed by the owner to get illegal compensations. In this paper, we propose a cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables indirect mutual trust between them. 
The proposed scheme has four important features: (i) it allows the owner to outsource sensitive data to a CSP, and perform full block-level dynamic operations on the outsourced data, i.e., block modification, insertion, deletion, and append, (ii) it ensures that authorized users (i.e., those who have the right to access the owner's file) receive the latest version of the outsourced data, (iii) it enables indirect mutual trust between the owner and the CSP, and (iv) it allows the owner to grant or revoke access to the outsourced data. 
We discuss the security issues of the proposed scheme. Moreover, we justify its performance through theoretical analysis and a prototype implementation on the Amazon cloud platform to evaluate storage, communication, and computation overheads.


Entrusting Private Computation and Data to Untrusted Networks
We present sTile, a technique for distributing trust-needing computation onto insecure networks, while providing probabilistic guarantees that malicious agents that compromise parts of the network cannot learn private data. With sTile, we explore the fundamental cost of achieving privacy through data distribution and bound how much less efficient a privacy-preserving system is than a nonprivate one. 
This paper focuses specifically on NP-complete problems and demonstrates how sTile-based systems can solve important real-world problems, such as protein folding, image recognition, and resource allocation. 
We present the algorithms involved in sTile and formally prove that sTile-based systems preserve privacy. We develop a reference sTile-based implementation and empirically evaluate it on several physical networks of varying sizes, including the globally distributed PlanetLab testbed. 
Our analysis demonstrates sTile's scalability and ability to handle varying network delay, and verifies that problems requiring privacy preservation can be solved using sTile orders of magnitude faster than using today's state-of-the-art alternatives.


Facilitating Effective User Navigation through Website Structure Improvement
Designing well-structured websites to facilitate effective user navigation has long been a challenge. A primary reason is that the web developers' understanding of how a website should be structured can be considerably different from that of the users. While various methods have been proposed to relink webpages to improve navigability using user navigation data, the completely reorganized new structure can be highly unpredictable, and the cost of disorienting users after the changes remains unanalyzed.
This paper addresses how to improve a website without introducing substantial changes. Specifically, we propose a mathematical programming model to improve the user navigation on a website while minimizing alterations to its current structure. 
Results from extensive tests conducted on a publicly available real data set indicate that our model not only significantly improves the user navigation with very few changes, but also can be effectively solved. We have also tested the model on large synthetic data sets to demonstrate that it scales up very well. In addition, we define two evaluation metrics and use them to assess the performance of the improved website using the real data set. 
Evaluation results confirm that the user navigation on the improved structure is indeed greatly enhanced. More interestingly, we find that heavily disoriented users are more likely to benefit from the improved structure than the less disoriented users.


Fast Algorithms and Performance Bounds for Sum Rate Maximization in Wireless Networks
In this paper, we consider a wireless network where interference is treated as noise, and we study the nonconvex problem of sum rate maximization by power control. We focus on finding approximately optimal solutions that can be efficiently computed to this NP-hard problem by studying the solutions to two related problems, the sum rate maximization using a signal-to-interference-plus-noise ratio (SINR) approximation and the max-min weighted SINR optimization. 
We show that these two problems are intimately connected, can be solved efficiently by algorithms with fast convergence and minimal parameter configuration, and can yield high-quality approximately optimal solutions to sum rate maximization in the low interference regime. 
As an application of these results, we analyze the connection-level stability of cross-layer utility maximization in the wireless network, where users arrive and depart randomly and are subject to congestion control, and the queue service rates at all the links are determined by the sum rate maximization problem. 
In particular, we determine the stability region when all the links solve the max-min weighted SINR problem, using instantaneous queue sizes as weights.
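
The max-min weighted SINR subproblem mentioned above is classically solved by a normalized fixed-point power-control iteration with exactly the fast-converging, parameter-free flavor the abstract describes. The following C# sketch shows that generic iteration under assumed channel gains G, noise powers n, weights w, and a total power budget P; it is not necessarily the authors' exact algorithm.

using System;
using System.Linq;

class MaxMinSinr
{
    // At the fixed point, the weighted SINRs of all links are equalized.
    static double[] Solve(double[,] G, double[] n, double[] w, double P, int iters = 100)
    {
        int k = n.Length;
        var p = Enumerable.Repeat(P / k, k).ToArray();
        for (int t = 0; t < iters; t++)
        {
            var q = new double[k];
            for (int i = 0; i < k; i++)
            {
                double interference = n[i];
                for (int j = 0; j < k; j++) if (j != i) interference += G[i, j] * p[j];
                q[i] = w[i] * interference / G[i, i]; // power needed for weighted SINR = 1
            }
            double scale = P / q.Sum();               // project back onto the power budget
            p = q.Select(x => x * scale).ToArray();
        }
        return p;
    }

    static void Main()
    {
        var G = new double[,] { { 1.0, 0.1 }, { 0.2, 1.0 } };
        var p = Solve(G, new[] { 0.01, 0.01 }, new[] { 1.0, 1.0 }, 1.0);
        Console.WriteLine(string.Join(", ", p.Select(x => x.ToString("F4"))));
    }
}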


Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm
The problem of efficiently and securely broadcasting to a remote cooperative group occurs in many newly emerging networks. A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. 
The existing key management paradigms cannot deal with these challenges effectively. In this paper, we circumvent these obstacles and close this gap by proposing a novel key management paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key agreement. In such a system, each member maintains a single public/secret key pair. 
Upon seeing the public keys of the members, a remote sender can securely broadcast to any intended subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven secure in the standard model. Even if all the nonintended members collude, they cannot extract any useful information from the transmitted messages. 
After the public group encryption key is extracted, both the computation overhead and the communication cost are independent of the group size. Furthermore, our scheme facilitates simple yet efficient member deletion/addition and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and its implementation friendliness without relying on a fully trusted authority render our protocol a very promising solution to many applications.


General Framework to Histogram-Shifting-Based Reversible Data Hiding
Histogram shifting (HS) is a useful technique of reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. With the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions. 
Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. 
In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
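
As one concrete instance of the shifting and embedding functions that the framework generalizes, the classic peak/zero histogram-shifting embedder can be sketched as follows. Gray levels 0-255, the existence of an empty bin to the right of the peak, and the omission of overflow bookkeeping and side information (the peak/zero pair, the message length) are simplifying assumptions of this sketch.

using System;
using System.Linq;

class HistogramShift
{
    static byte[] Embed(byte[] pixels, bool[] bits)
    {
        var hist = new int[256];
        foreach (var v in pixels) hist[v]++;
        int peak = Array.IndexOf(hist, hist.Max());   // capacity = hist[peak] bits
        int zero = Array.IndexOf(hist, 0, peak + 1);  // first empty bin right of the peak
        if (zero < 0) throw new InvalidOperationException("no empty bin: overflow handling needed");

        var outp = (byte[])pixels.Clone();
        int k = 0;
        for (int i = 0; i < outp.Length; i++)
        {
            if (outp[i] > peak && outp[i] < zero) outp[i]++;   // shift (peak, zero) right by one
            else if (outp[i] == peak && k < bits.Length)
                outp[i] += (byte)(bits[k++] ? 1 : 0);          // embed one bit at the peak bin
        }
        return outp;
    }
}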


Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation
In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends on the space distance of all neighboring pixels and their gray-level difference simultaneously. 
By using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. In order to further enhance its robustness to noise and outliers, we introduce a kernel distance measure to its objective function. The new algorithm adaptively determines the kernel parameter by using a fast bandwidth selection rule based on the distance variance of all data points in the collection. 
Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.
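
For orientation, the baseline fuzzy C-means updates into which the paper's trade-off weighted fuzzy factor and kernel distance are plugged look roughly as follows on raw intensities. The fuzzifier m, the cluster count c, and the iteration budget are assumed here, and the neighborhood factor and kernel metric themselves are deliberately omitted.

using System;
using System.Linq;

class Fcm
{
    static (double[] centers, double[,] u) Run(double[] x, int c = 3, double m = 2.0, int iters = 50)
    {
        var rnd = new Random(1);
        var centers = Enumerable.Range(0, c).Select(_ => x[rnd.Next(x.Length)]).ToArray();
        var u = new double[c, x.Length];
        for (int t = 0; t < iters; t++)
        {
            for (int j = 0; j < x.Length; j++)        // membership update
                for (int i = 0; i < c; i++)
                {
                    double dij = Math.Abs(x[j] - centers[i]) + 1e-9, s = 0;
                    for (int k = 0; k < c; k++)
                        s += Math.Pow(dij / (Math.Abs(x[j] - centers[k]) + 1e-9), 2.0 / (m - 1));
                    u[i, j] = 1.0 / s;
                }
            for (int i = 0; i < c; i++)               // center update
            {
                double num = 0, den = 0;
                for (int j = 0; j < x.Length; j++)
                {
                    double um = Math.Pow(u[i, j], m);
                    num += um * x[j]; den += um;
                }
                centers[i] = num / den;
            }
        }
        return (centers, u);
    }
}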


Improving Utilization of Infrastructure Clouds
A key advantage of infrastructure-as-a-service (IaaS) clouds is providing users on-demand access to resources. To provide on-demand access, however, cloud providers must either significantly overprovision their infrastructure (and pay a high price for operating resources with low utilization) or reject a large proportion of user requests (in which case the access is no longer on-demand). At the same time, not all users require truly on-demand access to resources. 
Many applications and workflows are designed for recoverable systems where interruptions in service are expected. For instance, many scientists utilize high-throughput computing (HTC)-enabled resources, such as Condor, where jobs are dispatched to available resources and terminated when the resource is no longer available. 
We propose a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill virtual machines (VMs). For demonstration and experimental evaluation, we extend the Nimbus cloud computing toolkit to deploy backfill VMs on idle cloud nodes for processing an HTC workload. 
Initial tests show an increase in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace but only 6.39% overhead cost for processing the HTC workload. We demonstrate that a shared infrastructure between IaaS cloud providers and an HTC job management system can be highly beneficial to both the IaaS cloud provider and HTC users by increasing the utilization of the cloud infrastructure (thereby decreasing the overall cost) and contributing cycles that would otherwise be idle to processing HTC jobs.


Joint Optimal Sensor Selection and Scheduling in Dynamic Spectrum Access Networks 
Spectrum sensing is key to the realization of dynamic spectrum access. To protect primary users' communications from the interference caused by secondary users, spectrum sensing must meet the strict detectability requirements set by regulatory bodies, such as the FCC. Such strict detection requirements, however, can hardly be achieved using PHY-layer sensing techniques alone with one-time sensing by only a single sensor. 
In this paper, we jointly exploit two MAC-layer sensing methods, cooperative sensing and sensing scheduling, to improve spectrum sensing performance while incurring minimum sensing overhead. While these sensing methods have been studied individually, little has been done on their combination and the resulting benefits. Specifically, we propose to construct a profile of the primary signal's RSSs and design a simple, yet near-optimal, incumbent detection rule. 
Based on this constructed RSS profile, we develop an algorithm to find 1) an optimal set of sensors; 2) an optimal point at which to stop scheduling additional sensing; and 3) an optimal sensing duration for one-time sensing, so as to make a tradeoff between detection performance and sensing overhead. 
Our evaluation results show that the proposed sensing algorithms reduce the sensing overhead by up to 65 percent, while meeting the requirements of both false-alarm and misdetection probabilities of less than 0.01.


Key Challenges in Cloud Computing: Enabling the Future Internet of Services
Cloud computing will play a major role in the future Internet of Services, enabling on-demand provisioning of applications, platforms, and computing infrastructures. However, the cloud community must address several technology challenges to turn this vision into reality. 
Specific issues relate to deploying future infrastructure-as-a-service clouds and include efficiently managing such clouds to deliver scalable and elastic service platforms on demand, developing cloud aggregation architectures and technologies that let cloud providers collaborate and interoperate, and improving cloud infrastructures' security, reliability, and energy efficiency.


Learning Dynamic Hybrid Markov Random Field for Image Labeling
The use of shape information has attracted increasing attention in the task of image labeling. In this paper, we present a dynamic hybrid Markov random field (DHMRF), which explicitly captures middle-level object shape and low-level visual appearance (e.g., texture and color) for image labeling. 
Each node in DHMRF is described by either a deformable template or an appearance model as visual prototype. On the other hand, the edges encode two types of interactions: co-occurrence and spatial layered context, with respect to the labels and prototypes of connected nodes. 
To learn the DHMRF model, an iterative algorithm is designed to automatically select the most informative features and estimate model parameters. The algorithm achieves high computational efficiency since a branch-and-bound schema is introduced to estimate model parameters. 
Compared with previous methods, which usually employ implicit shape cues, our DHMRF model seamlessly integrates color, texture, and shape cues to inference labeling output, and thus produces more accurate and reliable results. 
Extensive experiments validate its superiority over other state-of-the-art methods in terms of recognition accuracy and implementation efficiency on the MSRC 21-class dataset and the Lotus Hill Institute 15-class dataset.


Load Rebalancing for Distributed File Systems in Clouds
Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming paradigm. In such file systems, nodes simultaneously serve computing and storage functions; a file is partitioned into a number of chunks allocated in distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. However, in a cloud computing environment, failure is the norm, and nodes may be upgraded, replaced, and added in the system. Files can also be dynamically created, deleted, and appended. 
This results in load imbalance in a distributed file system; that is, the file chunks are not distributed as uniformly as possible among the nodes. Emerging distributed file systems in production systems strongly depend on a central node for chunk reallocation. This dependence is clearly inadequate in a large-scale, failure-prone environment because the central load balancer is put under considerable workload that is linearly scaled with the system size, and may thus become the performance bottleneck and the single point of failure. 
In this paper, a fully distributed load rebalancing algorithm is presented to cope with the load imbalance problem. Our algorithm is compared against a centralized approach in a production system and a competing distributed solution presented in the literature. 
The simulation results indicate that our proposal is comparable with the existing centralized approach and considerably outperforms the prior distributed algorithm in terms of load imbalance factor, movement cost, and algorithmic overhead. The performance of our proposal implemented in the Hadoop distributed file system is further investigated in a cluster environment.
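
A toy, centralized version of the rebalancing goal may help fix ideas (the paper's algorithm is fully distributed, with nodes sampling one another instead of reading a global view): chunks migrate from heavy to light nodes until every node's load is within a tolerance t of the average. Node identifiers, the threshold, and the data structure are assumptions of this sketch.

using System.Collections.Generic;
using System.Linq;

class Rebalancer
{
    static void Rebalance(Dictionary<string, List<int>> chunks, double t = 0.1)
    {
        double avg = chunks.Values.Average(c => c.Count);
        var heavy = new Queue<string>(chunks.Keys.Where(n => chunks[n].Count > avg * (1 + t)));
        var light = new Queue<string>(chunks.Keys.Where(n => chunks[n].Count < avg * (1 - t)));
        while (heavy.Count > 0 && light.Count > 0)
        {
            string h = heavy.Peek(), l = light.Peek();
            int chunk = chunks[h][^1];                // migrate one chunk (movement cost = 1)
            chunks[h].RemoveAt(chunks[h].Count - 1);
            chunks[l].Add(chunk);
            if (chunks[h].Count <= avg * (1 + t)) heavy.Dequeue();
            if (chunks[l].Count >= avg * (1 - t)) light.Dequeue();
        }
    }
}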


Local Directional Number Pattern for Face Analysis: Face and Expression Recognition 
This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods. 
We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode such information using the prominent direction indices (directional numbers) and sign, which allows us to distinguish among similar structural patterns that have different intensity transitions. 
We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. 
We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations. Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.
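
The core of the LDN code for a single 3x3 patch follows directly from the description above: convolve with the eight Kirsch compass masks and pack the indices of the strongest positive and strongest negative responses into one 6-bit directional number. Generating the masks by rotating one set of border weights is an implementation convenience of this sketch, not something the paper prescribes.

using System;
using System.Linq;

class Ldn
{
    // Kirsch border weights, clockwise from the top-left corner; rotating them yields all 8 masks.
    static readonly int[] Border = { -3, -3, 5, 5, 5, -3, -3, -3 };
    static readonly (int r, int c)[] Ring =
        { (0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0) };

    static int Code(int[,] patch)
    {
        var resp = new int[8];
        for (int d = 0; d < 8; d++)
            for (int k = 0; k < 8; k++)
                resp[d] += Border[(k + d) % 8] * patch[Ring[k].r, Ring[k].c];
        int top = Array.IndexOf(resp, resp.Max());    // dominant positive direction
        int bottom = Array.IndexOf(resp, resp.Min()); // dominant negative direction
        return (top << 3) | bottom;                   // 6-bit LDN code
    }
}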


Local Structure-Based Image Decomposition for Feature Extraction With Applications to Face Recognition 
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. 
This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. 
All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. 
The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.


Localization of Wireless Sensor Networks in the Wild: Pursuit of Ranging Quality 
Localization is a fundamental issue of wireless sensor networks that has been extensively studied in the literature. Our real-world experience from GreenOrbs, a sensor network system deployed in a forest, shows that localization in the wild remains very challenging due to various interfering factors. 
In this paper, we propose CDL, a Combined and Differentiated Localization approach that exploits the strength of range-free approaches and range-based approaches using the received signal strength indicator (RSSI). A critical observation is that ranging quality greatly impacts the overall localization accuracy. 
To achieve a better ranging quality, our method CDL incorporates virtual-hop localization, local filtration, and ranging-quality aware calibration. We have implemented and evaluated CDL by extensive real-world experiments in GreenOrbs and large-scale simulations. 
Our experimental and simulation results demonstrate that CDL outperforms current state-of-the-art localization approaches with more accurate and consistent performance. For example, the average location error using CDL in the GreenOrbs system is 2.9 m, while the previous best method, SISR, has an average error of 4.6 m.


Location-Aware and Safer Cards: Enhancing RFID Security and Privacy via Location Sensing
In this paper, we report on a new approach for enhancing security and privacy in certain RFID applications whereby location or location-related information (such as speed) can serve as a legitimate access context. Examples of these applications include access cards, toll cards, credit cards, and other payment tokens. 
We show that location awareness can be used by both tags and back-end servers for defending against unauthorized reading and relay attacks on RFID systems. On the tag side, we design a location-aware selective unlocking mechanism using which tags can selectively respond to reader interrogations rather than doing so promiscuously. 
On the server side, we design a location-aware secure transaction verification scheme that allows a bank server to decide whether to approve or deny a payment transaction and detect a specific type of relay attack involving malicious readers. The premise of our work is a current technological advancement that can enable RFID tags with low-cost location (GPS) sensing capabilities. 
Unlike prior research on this subject, our defenses do not rely on auxiliary devices or require any explicit user involvement.


Mining Contracts for Business Events and Temporal Constraints in Service Engagements
Contracts are legally binding descriptions of business service engagements. In particular, we consider business events as elements of a service engagement. Business events such as purchase, delivery, bill payment, and bank interest accrual not only correspond to essential processes but are also inherently temporally constrained. 
Identifying and understanding the events and their temporal relationships can help a business partner determine what to deliver and expect from others as it participates in the specified service engagement. However, contracts are expressed in unstructured text and their insights are buried therein. 
Our contributions are threefold. We develop a novel approach employing a hybrid of surface patterns, grammar parsing, and classification to (1) extract business events and (2) their temporal constraints from contract text. We use topic modeling to (3) automatically organize the event terms into clusters. 
An evaluation on a real-life contract dataset demonstrates the viability and promise of our hybrid approach, yielding an F-measure of 0.89 in event extraction and 0.90 in temporal constraints extraction. The topic model yields event term clusters with an average match of 85% between two independent human annotations and an expert-assigned set of class labels for the clusters.


Mining User Queries with Markov Chains: Application to Online Image Retrieval
We propose a novel method for automatic annotation, indexing and annotation-based retrieval of images. The new method, that we call Markovian Semantic Indexing (MSI), is presented in the context of an online image retrieval system. Assuming such a system, the users' queries are used to construct an Aggregate Markov Chain (AMC) through which the relevance between the keywords seen by the system is defined. The users' queries are also used to automatically annotate the images. 
A stochastic distance between images, based on their annotation and the keyword relevance captured in the AMC, is then introduced. Geometric interpretations of the proposed distance are provided and its relation to a clustering in the keyword space is investigated. By means of a new measure of Markovian state similarity, the mean first cross passage time (CPT), optimality properties of the proposed distance are proved. Images are modeled as points in a vector space and their similarity is measured with MSI. 
The new method is shown to possess certain theoretical advantages and also to achieve better Precision versus Recall results when compared to Latent Semantic Indexing (LSI) and probabilistic Latent Semantic Indexing (pLSI) methods in Annotation-Based Image Retrieval (ABIR) tasks.


Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks
Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies. 
We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects. First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. 
Our framework consists of three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes can move. The second algorithm improves the topology of the routing tree by greedily adding new nodes exploiting mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. 
This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change. We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.


Mobi-Sync: Efficient Time Synchronization for Mobile Underwater Sensor Networks


Multiparty Access Control for Online Social Networks: Model and Mechanisms
Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. 
To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. 
In addition, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide a usability study and system evaluation of our method.


Multi-View Video Representation Based on Fast Monte Carlo Surface Reconstruction
This paper provides an alternative solution to the costly representation of multi-view video data, which can be used for both rendering and scene analyses. Initially, a new efficient Monte Carlo discrete surface reconstruction method for foreground objects with static background is presented, which outperforms volumetric techniques and is suitable for GPU environments. 
Some extensions are also presented, which allow a speeding up of the reconstruction by exploiting multi-resolution and temporal correlations. Then, a fast meshing algorithm is applied, which allows interpolating a continuous surface from the discrete reconstructed points. 
As shown by the experimental results, the original video frames can be approximated with high accuracy by projecting the reconstructed foreground objects onto the original viewpoints. Furthermore, the reconstructed scene can be easily projected onto any desired virtual viewpoint, thus simplifying the design of free-viewpoint video applications. 
In our experimental results, we show that our techniques for reconstruction and meshing compare favorably with the state-of-the-art, and we also introduce a rule-of-thumb for effective application of the method with a good quality versus representation cost trade-off.


Network Traffic Classification Using Correlation Information
Traffic classification has wide applications in network management, from security monitoring to quality of service measurements. Recent research tends to apply machine learning techniques to flow statistical feature based classification methods. 
The nearest neighbor (NN)-based method has exhibited superior classification performance. It also has several important advantages, such as requiring no training procedure, carrying no risk of overfitting of parameters, and naturally being able to handle a huge number of classes. However, the performance of the NN classifier can be severely affected if the size of the training data is small. 
In this paper, we propose a novel nonparametric approach for traffic classification, which can improve the classification performance effectively by incorporating correlated information into the classification process. We analyze the new classification approach and its performance benefit from both theoretical and empirical perspectives. 
A large number of experiments are carried out on two real-world traffic data sets to validate the proposed approach. The results show that the traffic classification performance can be improved significantly even under the extremely difficult circumstance of very few training samples.
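
One plausible reading of "incorporating correlated information" is bag-of-flows nearest-neighbor classification: flows believed to be correlated (for instance, sharing a destination within a time window) are labeled jointly by the class whose training samples are closest on average across the whole bag. The sketch below is that idea under assumed Euclidean flow features, not necessarily the authors' exact combination method.

using System;
using System.Collections.Generic;
using System.Linq;

class BagOfFlowsNn
{
    static double Dist(double[] a, double[] b) =>
        Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());

    // Classify a bag of correlated flows by the class with the smallest
    // average nearest-neighbor distance over all flows in the bag.
    static string ClassifyBag(List<double[]> bag, List<(double[] f, string label)> train) =>
        train.GroupBy(s => s.label)
             .OrderBy(g => bag.Average(flow => g.Min(s => Dist(flow, s.f))))
             .First().Key;
}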


Network-Assisted Mobile Computing with Optimal Uplink Query Processing
Many mobile applications retrieve content from remote servers via user-generated queries. Processing these queries is often needed before the desired content can be identified. Processing the request on the mobile devices can quickly sap the limited battery resources. Conversely, processing user queries at remote servers can have slow response times due to communication latency incurred during transmission of the potentially large query. 
We evaluate a network-assisted mobile computing scenario where mid-network nodes with “leasing” capabilities are deployed by a service provider. Leasing computation power can reduce battery usage on the mobile devices and improve response times. However, borrowing processing power from mid-network nodes comes at a leasing cost which must be accounted for when making the decision of where processing should occur. 
We study the tradeoff between battery usage, processing and transmission latency, and mid-network leasing. We use the dynamic programming framework to solve for the optimal processing policies that suggest the amount of processing to be done at each mid-network node in order to minimize the processing and communication latency and processing costs. Through numerical studies, we examine the properties of the optimal processing policy and the core tradeoffs in such systems.


NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
Cloud security is one of the most important issues that has attracted a lot of research and development effort in the past few years. In particular, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. 
DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, and finally launching DDoS attacks through the compromised zombies. Within the cloud system, especially Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. 
This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. 
The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.


Noise Reduction Based on Partial-Reference, Dual-Tree Complex Wavelet Transform Shrinkage
This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular, though not exclusively, algorithms based on the random spray sampling technique. Due to the nature of sprays, output images of spray-based methods tend to exhibit noise with an unknown statistical distribution. 
To avoid inappropriate assumptions about the statistical characteristics of the noise, a different assumption is made: the non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and enhanced images. 
Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DTWCT). Unlike the discrete wavelet transform, the DTWCT allows for distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DTWCT, then it is normalized. 
The result is a map of the directional structures present in the non-enhanced image. Said map is then used to shrink the coefficients of the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced image is computed via the inverse transforms. A thorough numerical analysis of the results has been performed in order to confirm the validity of the proposed approach.


On Quality of Monitoring for Multi-channel Wireless Infrastructure Networks
Passive monitoring utilizing distributed wireless sniffers is an effective technique to monitor activities in wireless infrastructure networks for fault diagnosis, resource management and critical path analysis. 
In this paper, we introduce a quality of monitoring (QoM) metric defined by the expected number of active users monitored, and investigate the problem of maximizing QoM by judiciously assigning sniffers to channels based on the knowledge of user activities in a multi-channel wireless network. Two types of capture models are considered. 
The user-centric model assumes frame-level capturing capability of sniffers such that the activities of different users can be distinguished while the sniffer-centric model only utilizes the binary channel information (active or not) at a sniffer. For the user-centric model, we show that the implied optimization problem is NP-hard, but a constant approximation ratio can be attained via polynomial complexity algorithms. 
For the sniffer-centric model, we devise stochastic inference schemes to transform the problem into the user-centric domain, where we are able to apply our polynomial approximation algorithms. The effectiveness of our proposed schemes and algorithms is further evaluated using both synthetic data as well as real-world traces from an operational WLAN.
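
For the user-centric model, a greedy max-coverage assignment is the standard route to the constant-factor approximation the abstract refers to. This sketch assigns each sniffer, in a fixed order, to the channel adding the most not-yet-monitored users, which simplifies the usual pick-the-globally-best-pair greedy; the user and channel sets are illustrative.

using System.Collections.Generic;
using System.Linq;

class QomGreedy
{
    // users[s][ch] = set of users that sniffer s can capture on channel ch.
    static Dictionary<int, int> Assign(Dictionary<int, Dictionary<int, HashSet<int>>> users)
    {
        var covered = new HashSet<int>();
        var assignment = new Dictionary<int, int>();
        foreach (var s in users.Keys)
        {
            var best = users[s]
                .OrderByDescending(kv => kv.Value.Count(u => !covered.Contains(u)))
                .First();                     // channel with the largest marginal gain
            assignment[s] = best.Key;
            covered.UnionWith(best.Value);
        }
        return assignment;
    }
}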


On the Discovery of Critical Links and Nodes for Assessing Network Vulnerability
The assessment of network vulnerability is of great importance in the presence of unexpected disruptive events or adversarial attacks targeting critical network links and nodes. In this paper, we study the Critical Link Disruptor (CLD) and Critical Node Disruptor (CND) optimization problems to identify critical links and nodes in a network whose removals maximally destroy the network's functions. 
We provide a comprehensive complexity analysis of CLD and CND on general graphs and show that they still remain NP-complete even on unit disk graphs and power-law graphs. 
Furthermore, the CND problem is shown to be NP-hard to approximate within a factor of Ω((n - k)/n^ε) on general graphs with n vertices and k critical nodes. Despite the intractability of these problems, we propose HILPR, a novel LP-based rounding algorithm, for efficiently solving CLD and CND problems in a timely manner. The effectiveness of our solutions is validated on various synthetic and real-world networks.


On the Privacy Risks of Virtual Keyboards: Automatic Reconstruction of Typed Input from Compromising Reflections
We investigate the implications of the ubiquity of personal mobile devices and reveal new techniques for compromising the privacy of users typing on virtual keyboards. Specifically, we show that so-called compromising reflections (in, for example, a victim's sunglasses) of a device's screen are sufficient to enable automated reconstruction, from video, of text typed on a virtual keyboard. 
Through the use of advanced computer vision and machine learning techniques, we are able to operate under extremely realistic threat models, in real-world operating conditions, which are far beyond the range of more traditional OCR-based attacks. In particular, our system does not require expensive and bulky telescopic lenses: rather, we make use of off-the-shelf, handheld video cameras. 
In addition, we make no limiting assumptions about the motion of the phone or of the camera, nor the typing style of the user, and are able to reconstruct accurate transcripts of recorded input, even when using footage captured in challenging environments (e.g., on a moving bus). 
To further underscore the extent of this threat, our system is able to achieve accurate results even at very large distances: up to 61 m for direct surveillance, and 12 m for sunglass reflections. We believe these results highlight the importance of adjusting privacy expectations in response to emerging technologies.


Optimal Multicast Capacity and Delay Tradeoffs in MANETs


Optimal Multiserver Configuration for Profit Maximization in Cloud Computing
As cloud computing becomes more and more popular, understanding the economics of cloud computing becomes critically important. To maximize the profit, a service provider should understand both service charges and business costs, and how they are determined by the characteristics of the applications and the configuration of a multiserver system. 
The problem of optimal multiserver configuration for profit maximization in a cloud computing environment is studied. Our pricing model takes into consideration such factors as the amount of a service, the workload of an application environment, the configuration of a multiserver system, the service-level agreement, the satisfaction of a consumer, the quality of a service, the penalty of a low-quality service, the cost of renting, the cost of energy consumption, and a service provider's margin and profit. 
Our approach is to treat a multiserver system as an M/M/m queuing model, such that our optimization problem can be formulated and solved analytically. Two server speed and power consumption models are considered, namely, the idle-speed model and the constant-speed model. The probability density function of the waiting time of a newly arrived service request is derived. 
The expected service charge to a service request is calculated. The expected net business gain in one unit of time is obtained. Numerical calculations of the optimal server size and the optimal server speed are demonstrated.
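
The M/M/m ingredients are standard and easy to reproduce: the Erlang-C formula gives the probability that a request waits, hence the mean queueing delay, which can be plugged into a toy "net gain = revenue - cost" objective and scanned over the server count m and speed. The revenue and cost shapes below are placeholders, not the paper's exact pricing model.

using System;

class MultiserverProfit
{
    // Erlang-C probability that an arriving request must wait in an M/M/m queue.
    static double ErlangC(int m, double lambda, double mu)
    {
        double rho = lambda / (m * mu), a = lambda / mu;
        double sum = 0, term = 1;             // term tracks a^k / k!
        for (int k = 0; k < m; k++) { sum += term; term *= a / (k + 1); }
        double pm = term / (1 - rho);         // a^m / (m! (1 - rho))
        return pm / (sum + pm);
    }

    static double NetGain(int m, double speed, double lambda)
    {
        double mu = speed;                    // service rate grows with server speed
        if (lambda >= m * mu) return double.NegativeInfinity;     // unstable queue
        double wait = ErlangC(m, lambda, mu) / (m * mu - lambda); // mean queueing delay
        double revenue = lambda * Math.Max(0, 1.0 - wait);        // charge shrinks with delay
        double cost = m * (0.1 + 0.05 * speed * speed * speed);   // rental + power ~ speed^3
        return revenue - cost;
    }
}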


Predicting Architectural Vulnerability on Multithreaded Processors under Resource Contention and Sharing
Architectural vulnerability factor (AVF) characterizes a processor's vulnerability to soft errors. Interthread resource contention and sharing on a multithreaded processor (e.g., SMT, CMP) shows nonuniform impact on a program's AVF when it is co-scheduled with different programs. However, measuring the AVF is extremely expensive in terms of hardware and computation. 
This paper proposes a scalable two-level predictive mechanism capable of predicting a program's AVF on a SMT/CMP architecture from easily measured metrics. Essentially, the first-level model correlates the AVF in a contention-free environment with important performance metrics and the processor configuration, while the second-level model captures the interthread resource contention and sharing via processor structures' occupancies. 
By utilizing the proposed scheme, we can accurately estimate any unseen program's soft error vulnerability under resource contention and sharing with any other program(s), on an arbitrarily configured multithreaded processor. In practice, the proposed model can be used to find soft-error-resilient thread-to-core scheduling for multithreaded processors.


Predicting the Impact of Measures Against P2P Networks: Transient Behavior and Phase Transition
The paper has two objectives. The first is to rigorously study the transient behavior of some peer-to-peer (P2P) networks whenever information is replicated and disseminated according to epidemic-like dynamics. The second is to use the insight gained from this analysis in order to predict how efficient measures taken against P2P networks are. 
We first introduce a stochastic model that extends a classical epidemic model and characterize the P2P swarm behavior in presence of free-riding peers. We then study a second model in which a peer initiates a contact with another peer chosen randomly. In both cases, the network is shown to exhibit phase transitions: A small change in the parameters causes a large change in the behavior of the network. 
We show, in particular, how phase transitions affect the measures of content providers against P2P networks that distribute nonauthorized music, books, or articles, and what the efficiency of countermeasures is. In addition, our analytical framework can be generalized to characterize the heterogeneity of cooperative peers.


Privacy Preserving Delegated Access Control in Public Clouds
Current approaches to enforce fine-grained access control on confidential data hosted in the cloud are based on fine-grained encryption of the data. Under such approaches, data owners are in charge of encrypting the data before uploading them on the cloud and re-encrypting the data whenever user credentials change. Data owners thus incur high communication and computation costs. 
A better approach should delegate the enforcement of fine-grained access control to the cloud, so as to minimize the overhead at the data owners, while assuring data confidentiality from the cloud. We propose an approach, based on two layers of encryption, that addresses this requirement. Under our approach, the data owner performs a coarse-grained encryption, whereas the cloud performs a fine-grained encryption on top of the owner-encrypted data. 
A challenging issue is how to decompose access control policies (ACPs) such that the two-layer encryption can be performed. We show that this problem is NP-complete and propose novel optimization algorithms. We utilize an efficient group key management scheme that supports expressive ACPs. Our system assures the confidentiality of the data and preserves the privacy of users from the cloud while delegating most of the access control enforcement to the cloud.


Privacy-Preserving Public Auditing for Secure Cloud Storage
Using cloud storage, users can remotely store their data and enjoy the on-demand high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes the data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. 
Moreover, users should be able to just use the cloud storage as if it is local, without worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance so that users can resort to a third-party auditor (TPA) to check the integrity of outsourced data and be worry free. 
To securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities toward user data privacy, and introduce no additional online burden to the user. In this paper, we propose a secure cloud storage system supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits for multiple users simultaneously and efficiently. 
Extensive security and performance analysis show the proposed schemes are provably secure and highly efficient. Our preliminary experiment conducted on Amazon EC2 instance further demonstrates the fast performance of the design.


Randomized Information Dissemination in Dynamic Environments
We consider randomized broadcast or information dissemination in wireless networks with switching network topologies. We show that an upper bound for the ε-dissemination time consists of the conductance bound for a network without switching, and an adjustment that accounts for the number of informed nodes in each period between topology changes. Through numerical simulations, we show that our bound is asymptotically tight. 
We apply our results to the case of mobile wireless networks with unreliable communication links and establish an upper bound for the dissemination time when the network undergoes topology changes and periods of communication link erasures.


Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks
In this paper, we propose a redundancy management scheme for heterogeneous wireless sensor networks (HWSNs), utilizing multipath routing to answer user queries in the presence of unreliable and malicious nodes. 
The key concept of our redundancy management is to exploit the tradeoff between energy consumption vs. the gain in reliability, timeliness, and security to maximize the system useful lifetime. We formulate the tradeoff as an optimization problem for dynamically determining the best redundancy level to apply to multipath routing for intrusion tolerance so that the query response success probability is maximized while prolonging the useful lifetime. 
Furthermore, we consider this optimization problem for the case in which a voting-based distributed intrusion detection algorithm is applied to detect and evict malicious nodes in a HWSN. We develop a novel probability model to analyze the best redundancy level in terms of path redundancy and source redundancy, as well as the best intrusion detection settings in terms of the number of voters and the intrusion invocation interval under which the lifetime of a HWSN is maximized. 
We then apply the analysis results obtained to the design of a dynamic redundancy management algorithm to identify and apply the best design parameter settings at runtime in response to environment changes, to maximize the HWSN lifetime.
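
The path-redundancy half of the trade-off fits in a few lines: with per-path delivery probability p and m disjoint paths, a query succeeds when at least one copy gets through, so the smallest m meeting a required success probability minimizes energy per query and thereby stretches the lifetime. Here p, the reliability requirement, and the scan range are assumptions; the paper's model additionally covers source redundancy and the intrusion detection settings.

using System;

class Redundancy
{
    // Smallest path redundancy m with 1 - (1 - p)^m >= the required success probability.
    static int MinRedundancy(double p, double required, int maxPaths = 10)
    {
        for (int m = 1; m <= maxPaths; m++)
            if (1 - Math.Pow(1 - p, m) >= required) return m;
        return maxPaths; // requirement unreachable with the available paths
    }
}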


Regional Bit Allocation and Rate Distortion Optimization for Multiview Depth Video Coding With View Synthesis Distortion Model 
In this paper, we propose a view synthesis distortion model (VSDM) that establishes the relationship between depth distortion and view synthesis distortion for regions with different characteristics: color texture area corresponding depth (CTAD) regions and color smooth area corresponding depth (CSAD) regions, respectively. 
With this VSDM, we propose regional bit allocation (RBA) and rate distortion optimization (RDO) algorithms for multiview depth video coding (MDVC) by allocating more bits on CTAD for rendering quality and fewer bits on CSAD for compression efficiency. 
Experimental results show that the proposed VSDM based RBA and RDO can improve the coding efficiency significantly for the test sequences. In addition, for the proposed overall MDVC algorithm that integrates VSDM based RBA and RDO, it achieves 9.99% and 14.51% bit rate reduction on average for the high and low bit rate, respectively. 
It can improve virtual view image quality by 0.22 and 0.24 dB on average at the high and low bit rates, respectively, when compared with the original joint multiview video coding model. The RD performance comparisons using five different metrics also validate the effectiveness of the proposed overall algorithm. In addition, the proposed algorithms can be applied to both INTRA and INTER frames.


Rotation Invariant Local Frequency Descriptors for Texture Classification
This paper presents a novel rotation invariant method for texture classification based on local frequency components. The local frequency components are computed by applying 1-D Fourier transform on a neighboring function defined on a circle of radius R at each pixel. We observed that the low frequency components are the major constituents of the circular functions and can effectively represent textures. 
Three sets of features are extracted from the low frequency components, two based on the phase and one based on the magnitude. The proposed features are invariant to rotation and linear changes of illumination. Moreover, by using low frequency components, the proposed features are very robust to noise. 
While the proposed method uses a relatively small number of features, it outperforms state-of-the-art methods in three well-known datasets: Brodatz, Outex, and CUReT. In addition, the proposed method is very robust to noise and can remarkably improve the classification accuracy especially in the presence of high levels of noise.
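
The per-pixel computation can be sketched directly from the description above: sample P neighbors on a circle of radius R (nearest-neighbor sampling here for brevity; the paper does not prescribe this shortcut), take the 1-D DFT of the circular sequence, and keep the low-frequency magnitudes, which are unchanged when a rotation circularly shifts the samples.

using System;

class LocalFrequency
{
    // Low-frequency DFT magnitudes of the circular neighborhood of pixel (r, c).
    // The caller must keep (r, c) at least R pixels away from the image border.
    static double[] LowFreqMagnitudes(double[,] img, int r, int c, int R = 2, int P = 8, int keep = 3)
    {
        var f = new double[P];
        for (int k = 0; k < P; k++)
        {
            double a = 2 * Math.PI * k / P;
            f[k] = img[r + (int)Math.Round(R * Math.Sin(a)), c + (int)Math.Round(R * Math.Cos(a))];
        }
        var mags = new double[keep];
        for (int u = 0; u < keep; u++)        // naive DFT, first `keep` components
        {
            double re = 0, im = 0;
            for (int k = 0; k < P; k++)
            {
                re += f[k] * Math.Cos(2 * Math.PI * u * k / P);
                im -= f[k] * Math.Sin(2 * Math.PI * u * k / P);
            }
            mags[u] = Math.Sqrt(re * re + im * im);
        }
        return mags;
    }
}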


Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption
Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. 
To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation, have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. 
In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. 
Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains that greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. 
Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.


Scalable Coding of Depth Maps With R-D Optimized Embedding
Recent work on depth map compression has revealed the importance of incorporating a description of discontinuity boundary geometry into the compression scheme. We propose a novel compression strategy for depth maps that incorporates geometry information while achieving the goals of scalability and embedded representation. 
Our scheme involves two separate image pyramid structures, one for breakpoints and the other for sub-band samples produced by a breakpoint-adaptive transform. Breakpoints capture geometric attributes, and are amenable to scalable coding. We develop a rate-distortion optimization framework for determining the presence and precision of breakpoints in the pyramid representation. 
We employ a variation of the EBCOT scheme to produce embedded bit-streams for both the breakpoint and sub-band data. Compared to JPEG 2000, our proposed scheme enables the same scalability features while achieving substantially improved rate-distortion performance at the higher bit-rate range and comparable performance at the lower rates.


Secure and Efficient Data Transmission for Cluster-based Wireless Sensor Networks
Secure data transmission is a critical issue for wireless sensor networks (WSNs). Clustering is an effective and practical way to enhance the system performance of WSNs. In this paper, we study a secure data transmission for cluster-based WSNs (CWSNs), where the clusters are formed dynamically and periodically. 
We propose two Secure and Efficient data Transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the Identity-Based digital Signature (IBS) scheme and the Identity-Based Online/Offline digital Signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the Diffie-Hellman problem in the pairing domain. SET-IBOOS further reduces the computation overhead for protocol security, which is crucial for WSNs, while its security relies on the hardness of the discrete logarithm problem. 
We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and security analysis against various attacks. The calculations and simulations are provided to illustrate the efficiency of the proposed protocols. 
The results show that the proposed protocols have better performance than the existing secure protocols for CWSNs in terms of security overhead and energy consumption.


Secure Encounter-based Mobile Social Networks: Requirements, Designs, and Tradeoffs
Encounter-based social networks link users who share a location at the same time, as opposed to traditional social network paradigms of linking users who have an offline friendship. This approach presents fundamentally different challenges from those tackled by previous designs. 
In this paper, we explore functional and security requirements for these new systems, such as availability, security, and privacy, and present several design options for building secure encounter-based social networks. We examine one recently proposed encounter-based social network design and compare it to a set of idealized security and functionality requirements. 
We show that it is vulnerable to several attacks, including impersonation, collusion, and privacy breaching, even though it was designed specifically for security. Mindful of the possible pitfalls, we construct a flexible framework for secure encounter-based social networks, which can be used to construct networks that offer different security, privacy, and availability guarantees. 
We describe two example constructions derived from this framework, and consider each in terms of the ideal requirements. Some of our new designs fulfill more requirements in terms of system security, reliability, and privacy than previous work. We also evaluate real-world performance of one of our designs by implementing a proof-of-concept iPhone application called MeetUp. Experiments highlight the potential of our system.


Security and Privacy-Enhancing Multi-cloud Architectures
Security challenges are still among the biggest obstacles when considering the adoption of cloud services. This triggered a lot of research activities, resulting in a quantity of proposals targeting the various cloud security threats. 
Alongside these security issues, the cloud paradigm comes with a new set of unique features, which open the path toward novel security approaches, techniques, and architectures. 
This paper provides a survey on the achievable security merits by making use of multiple distinct clouds simultaneously. Various distinct architectures are introduced and discussed according to their security and privacy capabilities and prospects.


SORT: A Self-Organizing Trust Model for Peer-to-Peer Systems
The open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate attacks by malicious peers. This paper presents distributed algorithms that enable a peer to reason about the trustworthiness of other peers based on past interactions and recommendations. 
Peers create their own trust network in their proximity by using locally available information and do not try to learn global trust information. Two contexts of trust, service and recommendation, are defined to measure trustworthiness in providing services and giving recommendations. 
Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, a recommender's trustworthiness and confidence about a recommendation are considered when evaluating recommendations. 
Simulation experiments on a file sharing application show that the proposed model can mitigate attacks on 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers.
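As a rough illustration of how such a trust metric might aggregate interaction history, the C# sketch below (C# 9+) weights each interaction's satisfaction by recency and importance; the exponential-decay weighting and field names are our own simplification, not SORT's actual equations.

using System;
using System.Collections.Generic;

record Interaction(double Satisfaction,  // in [0, 1]
                   double Importance,    // in [0, 1]
                   int AgeInDays);       // how long ago it happened

class TrustDemo
{
    // Recency- and importance-weighted mean of satisfaction values.
    static double TrustScore(IEnumerable<Interaction> history, double halfLifeDays = 30)
    {
        double num = 0, den = 0;
        foreach (var i in history)
        {
            double recency = Math.Pow(0.5, i.AgeInDays / halfLifeDays);
            double w = recency * i.Importance;
            num += w * i.Satisfaction;
            den += w;
        }
        return den == 0 ? 0 : num / den;  // no history -> no trust
    }

    static void Main()
    {
        var history = new List<Interaction>
        {
            new(0.9, 1.0, 2),    // recent, important, good service
            new(0.2, 0.5, 90),   // old, minor, poor service
        };
        Console.WriteLine($"trust = {TrustScore(history):F3}");
    }
}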


Self-Supervised Online Metric Learning With Low Rank Constraint for Scene Categorization
Conventional visual recognition systems usually train an image classifier in a batch mode, with all training data provided in advance. However, in many practical applications, only a small number of training samples are available at the beginning, and many more arrive sequentially during online recognition. Because the image data characteristics can change over time, it is important for the classifier to adapt to new data incrementally. 
In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. 
By incorporating the low-rank constraint, our online metric learning model not only provides competitive performance compared with state-of-the-art methods but also guarantees convergence. A bilinear graph is also defined to model pairwise similarity: an unseen sample is labeled via graph-based label propagation, and the model self-updates using the more confident new samples. 
With its online learning ability, our method handles large-scale streaming video data with incremental self-updating. We apply our model to online scene categorization; experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.
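To give a flavor of online metric learning, the sketch below maintains a diagonal Mahalanobis metric and applies hinge-style stochastic updates on labeled pairs, projecting weights back to nonnegative values to keep the metric valid. The full method learns a low-rank matrix and adds label propagation, both omitted here; the margin and learning rate are illustrative.

using System;

class OnlineMetricDemo
{
    // Squared Mahalanobis distance with a diagonal metric w (w[i] >= 0).
    static double Dist2(double[] w, double[] a, double[] b)
    {
        double d = 0;
        for (int i = 0; i < w.Length; i++)
        {
            double diff = a[i] - b[i];
            d += w[i] * diff * diff;
        }
        return d;
    }

    // One online update: pull same-class pairs together, push
    // different-class pairs apart until they clear the margin.
    static void Update(double[] w, double[] a, double[] b,
                       bool sameClass, double margin = 1.0, double lr = 0.05)
    {
        double d2 = Dist2(w, a, b);
        // Hinge-style condition: update only when the pair violates the margin.
        bool violated = sameClass ? d2 > margin : d2 < margin;
        if (!violated) return;

        double sign = sameClass ? -1.0 : +1.0;  // shrink or grow the distance
        for (int i = 0; i < w.Length; i++)
        {
            double diff = a[i] - b[i];
            w[i] += sign * lr * diff * diff;
            if (w[i] < 0) w[i] = 0;  // projection keeps the metric valid
        }
    }

    static void Main()
    {
        var w = new double[] { 1, 1 };
        Update(w, new[] { 0.0, 0.0 }, new[] { 2.0, 0.0 }, sameClass: true);
        Update(w, new[] { 0.0, 0.0 }, new[] { 0.0, 0.5 }, sameClass: false);
        Console.WriteLine($"w = [{w[0]:F3}, {w[1]:F3}]");
    }
}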


SPOC: A Secure and Privacy-Preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency
With the pervasiveness of smartphones and the advance of wireless body sensor networks (BSNs), mobile healthcare (m-Healthcare), which extends the operation of healthcare providers into a pervasive environment for better health monitoring, has attracted considerable interest recently. However, the flourishing of m-Healthcare still faces many challenges, including information security and privacy preservation. 
In this paper, we propose a secure and privacy-preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency. With SPOC, smart phone resources including computing power and energy can be opportunistically gathered to process the computing-intensive personal health information (PHI) during m-Healthcare emergency with minimal privacy disclosure.
Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in an m-Healthcare emergency, we introduce an efficient user-centric privacy access control into the SPOC framework. It is based on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique, and allows a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. Detailed security analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy access control in an m-Healthcare emergency. 
In addition, performance evaluations via extensive simulations demonstrate SPOC's effectiveness in terms of providing highly reliable PHI processing and transmission while minimizing privacy disclosure during an m-Healthcare emergency.
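The PPSPC idea can be sketched with an additively homomorphic scheme such as Paillier: one party encrypts its vector, and the other raises each ciphertext to its own coordinate and multiplies the results, yielding an encryption of the inner product without ever seeing the plaintexts. The C# toy below uses demo-sized primes that offer no real security, and the helper names are ours, not the paper's protocol.

using System;
using System.Numerics;

class PaillierScalarProductDemo
{
    // Toy Paillier keys (demo-sized primes; never use in practice).
    static readonly BigInteger p = 293, q = 433;
    static readonly BigInteger n = p * q;
    static readonly BigInteger n2 = n * n;
    static readonly BigInteger g = n + 1;            // standard choice of generator
    static readonly BigInteger lambda = Lcm(p - 1, q - 1);
    static readonly BigInteger mu = ModInverse(L(BigInteger.ModPow(g, lambda, n2)), n);

    static BigInteger L(BigInteger u) => (u - 1) / n;
    static BigInteger Lcm(BigInteger a, BigInteger b) =>
        a / BigInteger.GreatestCommonDivisor(a, b) * b;

    static BigInteger ModInverse(BigInteger a, BigInteger m)
    {
        // Extended Euclid.
        BigInteger g0 = m, x0 = 0, g1 = a % m, x1 = 1;
        while (g1 != 0)
        {
            BigInteger qt = g0 / g1;
            (g0, g1) = (g1, g0 - qt * g1);
            (x0, x1) = (x1, x0 - qt * x1);
        }
        return ((x0 % m) + m) % m;
    }

    static readonly Random rng = new Random(42);

    static BigInteger Encrypt(BigInteger m)
    {
        BigInteger r;
        do { r = new BigInteger(rng.Next(2, 10000)) % n; }
        while (BigInteger.GreatestCommonDivisor(r, n) != 1);
        return BigInteger.ModPow(g, m, n2) * BigInteger.ModPow(r, n, n2) % n2;
    }

    static BigInteger Decrypt(BigInteger c) =>
        L(BigInteger.ModPow(c, lambda, n2)) * mu % n;

    static void Main()
    {
        long[] x = { 3, 1, 4 };   // party A's private vector
        long[] y = { 2, 7, 1 };   // party B's private vector

        // A encrypts each coordinate and sends the ciphertexts to B.
        BigInteger[] cx = Array.ConvertAll(x, v => Encrypt(v));

        // B homomorphically computes Enc(sum x_i * y_i) without seeing x.
        BigInteger acc = Encrypt(0);
        for (int i = 0; i < y.Length; i++)
            acc = acc * BigInteger.ModPow(cx[i], y[i], n2) % n2;

        // A decrypts: 3*2 + 1*7 + 4*1 = 17.
        Console.WriteLine(Decrypt(acc));
    }
}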


Target Tracking and Mobile Sensor Navigation in Wireless Sensor Networks
This work studies the problem of tracking signal-emitting mobile targets using navigated mobile sensors based on signal reception. Since the mobile target's maneuver is unknown, the mobile sensor controller utilizes measurements collected by a wireless sensor network, namely the mobile target signal's time of arrival (TOA). 
The mobile sensor controller acquires the TOA measurement information from both the mobile target and the mobile sensor for estimating their locations before directing the mobile sensor's movement to follow the target. We propose a min-max approximation approach to estimate the location for tracking which can be efficiently solved via semidefinite programming (SDP) relaxation, and apply a cubic function for mobile sensor navigation. 
We estimate the location of the mobile sensor and target jointly to improve the tracking accuracy. To further improve the system performance, we propose a weighted tracking algorithm by using the measurement information more efficiently. 
Our results demonstrate that the proposed algorithm provides good tracking performance and can quickly direct the mobile sensor to follow the mobile target.
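To illustrate the min-max criterion, the sketch below estimates a 2-D position from noisy TOA-derived ranges by brute-force grid search for the point minimizing the maximum absolute range residual; the paper instead solves this via SDP relaxation, and the anchors, noise level, and grid resolution here are illustrative.

using System;

class MinMaxToaDemo
{
    static void Main()
    {
        // Anchor positions and noisy range measurements to an unknown target.
        double[,] anchors = { { 0, 0 }, { 10, 0 }, { 0, 10 }, { 10, 10 } };
        double[] target = { 6.0, 3.0 };
        var rng = new Random(1);
        int m = anchors.GetLength(0);
        double[] ranges = new double[m];
        for (int i = 0; i < m; i++)
            ranges[i] = Dist(anchors[i, 0], anchors[i, 1], target[0], target[1])
                        + (rng.NextDouble() - 0.5) * 0.2;  // +/- 0.1 range noise

        // Min-max estimate: the point minimizing the worst range residual.
        double bestX = 0, bestY = 0, best = double.MaxValue;
        for (double x = 0; x <= 10; x += 0.05)
        for (double y = 0; y <= 10; y += 0.05)
        {
            double worst = 0;
            for (int i = 0; i < m; i++)
                worst = Math.Max(worst,
                    Math.Abs(Dist(anchors[i, 0], anchors[i, 1], x, y) - ranges[i]));
            if (worst < best) { best = worst; bestX = x; bestY = y; }
        }
        Console.WriteLine($"estimate = ({bestX:F2}, {bestY:F2}), residual = {best:F3}");
    }

    static double Dist(double ax, double ay, double bx, double by)
    {
        double dx = ax - bx, dy = ay - by;
        return Math.Sqrt(dx * dx + dy * dy);
    }
}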


Toward Accurate Mobile Sensor Network Localization in Noisy Environments
The node localization problem in mobile sensor networks has received significant attention. Recently, particle filters adapted from robotics have produced good localization accuracies in conventional settings. 
In spite of these successes, state-of-the-art solutions suffer significantly when used in challenging indoor and mobile environments characterized by a high degree of radio signal irregularity. New solutions are needed to address these challenges. 
We propose a fuzzy logic-based approach for mobile node localization in challenging environments. Localization is formulated as a fuzzy multilateration problem. For sparse networks with few available anchors, we propose a fuzzy grid-prediction scheme. 
The fuzzy logic-based localization scheme is implemented in a simulator and compared to state-of-the-art solutions. Extensive simulation results demonstrate improvements in localization accuracy of 20 to 40 percent when radio irregularity is high. 
A hardware implementation running on Epic motes and transported by iRobot mobile hosts confirms simulation results and extends them to the real world.
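As a much-simplified taste of the fuzzy approach, the sketch below maps each anchor's RSSI to a fuzzy confidence weight and localizes the node at the confidence-weighted centroid of the anchors. The actual scheme performs fuzzy multilateration and grid prediction; the membership function and all values here are illustrative assumptions.

using System;

class FuzzyLocalizationDemo
{
    // Triangular membership: confidence peaks at rssiBest and falls to 0 at rssiWorst.
    static double Confidence(double rssi, double rssiBest = -40, double rssiWorst = -90)
    {
        double t = (rssi - rssiWorst) / (rssiBest - rssiWorst);
        return Math.Clamp(t, 0.0, 1.0);
    }

    static void Main()
    {
        // Anchor positions and the RSSI each one heard from the mobile node.
        (double x, double y, double rssi)[] anchors =
        {
            (0, 0, -55), (10, 0, -70), (0, 10, -80), (10, 10, -88),
        };

        // Confidence-weighted centroid: stronger (more trusted) anchors pull harder.
        double wx = 0, wy = 0, wsum = 0;
        foreach (var a in anchors)
        {
            double w = Confidence(a.rssi);
            wx += w * a.x; wy += w * a.y; wsum += w;
        }
        Console.WriteLine($"estimate = ({wx / wsum:F2}, {wy / wsum:F2})");
    }
}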


Toward Fine-Grained, Unsupervised, Scalable Performance Diagnosis for Production Cloud Computing Systems 
Performance diagnosis is labor-intensive in production cloud computing systems. Such systems typically face many real-world challenges that existing diagnosis techniques for distributed systems cannot effectively solve. 
An efficient, unsupervised diagnosis tool for locating fine-grained performance anomalies is still lacking in production cloud computing systems. This paper proposes CloudDiag to bridge this gap. Combining a statistical technique with a fast matrix recovery algorithm, CloudDiag can efficiently pinpoint fine-grained causes of performance problems without requiring any domain-specific knowledge of the target system. 
CloudDiag has been applied in a practical production cloud computing system to diagnose performance problems. We demonstrate the effectiveness of CloudDiag in three real-world case studies.
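A crude stand-in for the idea, assuming latency samples already grouped per method from request traces: flag methods whose latency coefficient of variation exceeds a threshold as candidate anomaly sources. CloudDiag's statistical and matrix-recovery machinery is far more sophisticated; the threshold and data below are illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

class CloudDiagSketch
{
    static void Main()
    {
        // Latency samples (ms) per method, e.g. aggregated from request traces.
        var latencies = new Dictionary<string, double[]>
        {
            ["Auth.Login"]   = new[] { 11.0, 12.0, 10.5, 11.8 },
            ["Storage.Read"] = new[] { 40.0, 42.0, 480.0, 41.0 },  // one huge outlier
            ["Cache.Get"]    = new[] { 1.1, 1.0, 1.2, 1.1 },
        };

        // Flag methods with a high coefficient of variation (stddev / mean).
        foreach (var (method, xs) in latencies)
        {
            double mean = xs.Average();
            double std = Math.Sqrt(xs.Select(v => (v - mean) * (v - mean)).Average());
            double cv = std / mean;
            if (cv > 0.5)  // illustrative threshold
                Console.WriteLine($"suspect: {method} (cv = {cv:F2})");
        }
    }
}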


Toward Secure Multikeyword Top-k Retrieval over Encrypted Cloud Data
Cloud computing has emerged as a promising paradigm for data outsourcing and high-quality data services. However, concerns over sensitive information in the cloud potentially cause privacy problems. Data encryption protects data security to some extent, but at the cost of compromised efficiency. Searchable symmetric encryption (SSE) allows retrieval of encrypted data over the cloud. 
In this paper, we focus on addressing data privacy issues using SSE. For the first time, we formulate the privacy issue from the aspects of similarity relevance and scheme robustness. We observe that server-side ranking based on order-preserving encryption (OPE) inevitably leaks data privacy. 
To eliminate the leakage, we propose a two-round searchable encryption (TRSE) scheme that supports top-k multikeyword retrieval. In TRSE, we employ a vector space model and homomorphic encryption. The vector space model provides sufficient search accuracy, and the homomorphic encryption enables users to participate in the ranking while the majority of the computing work is done on the server side by operations on ciphertext only. 
As a result, information leakage can be eliminated and data security is ensured. Thorough security and performance analysis shows that the proposed scheme guarantees high security and practical efficiency.
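The vector space model half of the scheme can be sketched in the clear: score each document by the (smoothed) TF-IDF weights of the query keywords and keep the top-k. In TRSE these scores would be computed server-side over homomorphically encrypted vectors; the data and names below are illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

class TopKRetrievalSketch
{
    static void Main()
    {
        var docs = new Dictionary<string, string[]>
        {
            ["d1"] = new[] { "cloud", "storage", "encryption" },
            ["d2"] = new[] { "cloud", "cloud", "pricing" },
            ["d3"] = new[] { "encryption", "keys", "cloud" },
        };
        string[] query = { "cloud", "encryption" };
        int k = 2;

        int n = docs.Count;
        // Document frequency per term, for the IDF part.
        var df = docs.Values.SelectMany(d => d.Distinct())
                     .GroupBy(t => t).ToDictionary(g => g.Key, g => g.Count());

        // Score = sum over query terms of tf * smoothed idf.
        var scores = docs.Select(kv =>
        {
            double s = 0;
            foreach (var term in query)
            {
                int tf = kv.Value.Count(t => t == term);
                if (tf > 0) s += tf * Math.Log(1.0 + (double)n / df[term]);
            }
            return (Doc: kv.Key, Score: s);
        });

        foreach (var (doc, score) in scores.OrderByDescending(x => x.Score).Take(k))
            Console.WriteLine($"{doc}: {score:F3}");
    }
}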


TrustedDB: A Trusted Hardware Based Database with Privacy and Data Confidentiality
Traditionally, as soon as confidentiality becomes a concern, data is encrypted before outsourcing to a service provider. Any software-based cryptographic constructs then deployed for server-side query processing on the encrypted data inherently limit query expressiveness. 
Here, we introduce TrustedDB, an outsourced database prototype that allows clients to execute SQL queries with privacy and under regulatory compliance constraints by leveraging server-hosted, tamper-proof trusted hardware in critical query processing stages, thereby removing any limitations on the type of supported queries. 
Despite the cost overhead and performance limitations of trusted hardware, we show that the costs per query are orders of magnitude lower than any (existing or) potential future software-only mechanisms. TrustedDB is built and runs on actual hardware, and its performance and costs are evaluated here.


Two-Dimensional Orthogonal DCT Expansion in Trapezoid and Triangular Blocks and Modified JPEG Image Compression 
In the conventional JPEG algorithm, an image is divided into 8 x 8 blocks and the 2-D DCT is applied to encode each block. In this paper, we find that, in addition to rectangular blocks, the 2-D DCT is also orthogonal in trapezoid and triangular blocks. 
Therefore, instead of 8 x 8 blocks, we can generalize the JPEG algorithm and divide an image into trapezoid and triangular blocks according to the shapes of objects, achieving a higher compression ratio. Compared with existing shape-adaptive compression algorithms, since we do not try to match the shape of each object exactly, fewer bytes are needed to encode the edges, and the error caused by the high-frequency components at the boundary can be avoided. 
The simulations show that, when the bit rate is fixed, our proposed algorithm achieves higher PSNR than the JPEG algorithm and other shape-adaptive algorithms. Furthermore, in addition to the 2-D DCT, our method can also generate the 2-D complete and orthogonal sine basis, Hartley basis, Walsh basis, and discrete polynomial basis in a trapezoid or triangular block.
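For reference, the standard JPEG building block that the paper generalizes is the 2-D DCT of an 8 x 8 block, shown below in a direct, unoptimized form; real codecs use fast transforms.

using System;

class Dct8x8Demo
{
    const int N = 8;

    // Direct 2-D DCT-II of an N x N block (O(N^4); for illustration only).
    static double[,] Dct2D(double[,] block)
    {
        var coeff = new double[N, N];
        for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
        {
            double sum = 0;
            for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                sum += block[x, y]
                     * Math.Cos((2 * x + 1) * u * Math.PI / (2.0 * N))
                     * Math.Cos((2 * y + 1) * v * Math.PI / (2.0 * N));
            double cu = u == 0 ? Math.Sqrt(1.0 / N) : Math.Sqrt(2.0 / N);
            double cv = v == 0 ? Math.Sqrt(1.0 / N) : Math.Sqrt(2.0 / N);
            coeff[u, v] = cu * cv * sum;
        }
        return coeff;
    }

    static void Main()
    {
        // A flat block concentrates all energy in the DC coefficient.
        var block = new double[N, N];
        for (int x = 0; x < N; x++)
            for (int y = 0; y < N; y++)
                block[x, y] = 128;

        var coeff = Dct2D(block);
        Console.WriteLine($"DC = {coeff[0, 0]:F1}");       // 8 * 128 = 1024
        Console.WriteLine($"AC(0,1) = {coeff[0, 1]:F6}");  // ~0
    }
}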


Understanding the Scheduling Performance in Wireless Networks with Successive Interference Cancellation
Successive interference cancellation (SIC) is an effective way of multipacket reception to combat interference in wireless networks. We focus on link scheduling in wireless networks with SIC, and propose a layered protocol model and a layered physical model to characterize the impact of SIC. 
In both interference models, we show that several existing scheduling schemes achieve the same order of approximation ratios, regardless of whether SIC is available. Moreover, the capacity order in a network with SIC is the same as that without SIC. We then examine the impact of SIC from first principles. 
In both chain and cell topologies, SIC does improve throughput, with a gain between 20 and 100 percent. However, unless SIC is properly characterized, no scheduling scheme can effectively utilize the new transmission opportunities. 
The results indicate the challenge of designing an SIC-aware scheduling scheme, and suggest that the approximation ratio is insufficient to measure scheduling performance when SIC is available.
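The core SIC receive loop is easy to state in code: repeatedly attempt to decode the strongest remaining signal against the residual interference plus noise, subtracting it on success. The SINR threshold and power values below are illustrative; with these numbers the receiver decodes one signal and then gets stuck, showing both outcomes.

using System;
using System.Collections.Generic;
using System.Linq;

class SicDemo
{
    static void Main()
    {
        var powers = new List<double> { 8.0, 2.0, 1.5 };  // received signal powers
        double noise = 0.1;
        double sinrThreshold = 1.5;  // illustrative decode threshold

        // Successive interference cancellation: decode strongest first,
        // subtract it, and retry on the residual.
        while (powers.Count > 0)
        {
            double strongest = powers.Max();
            double interference = powers.Sum() - strongest;
            double sinr = strongest / (interference + noise);
            if (sinr < sinrThreshold)
            {
                Console.WriteLine($"stuck: SINR {sinr:F2} below threshold");
                break;
            }
            Console.WriteLine($"decoded signal with power {strongest} (SINR {sinr:F2})");
            powers.Remove(strongest);
        }
    }
}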



FOR MORE ABSTRACTS, IEEE BASE PAPERS / REFERENCE PAPERS AND NON-IEEE PROJECT ABSTRACTS

CONTACT US
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com 


SOFTWARE PROJECTS IN
Java, J2EE, J2ME, JavaFx, DotNET, ASP.NET, VB.NET, C#, PHP, NS2, Matlab, Android
For Software Projects - 044-28235816, 9841193224
ncctchennai@gmail.com, www.ncct.in


Project Support Services
Complete Guidance | 100% Result for all Projects | On-time Completion | Excellent Support | Project Completion Experience Certificate | Free Placement Services | Multi Platform Training | Real Time Experience


TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office


WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive a FREE Project Titles List / Abstracts / IEEE Base Papers DVD… Walk in to our Office and Collect the same, Or

Send your College ID scan copy, your Mobile No & Complete Postal Address, mentioning that you are interested to Receive the DVD through Courier, Free of Cost


Own Projects
Own Projects! Or a New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly

We will do any Projects…


