Wednesday, July 3, 2013

Android Project Titles, Android Project Abstracts, Android IEEE Project Abstracts, Android Projects abstracts for CSE IT MCA, Download Android Titles, Download Android Project Abstracts, Download IEEE Android Abstracts

ANDROID PROJECTS - ABSTRACTS
A Scalable Server Architecture for Mobile Presence Services in Social Network Applications
Social network applications are becoming increasingly popular on mobile devices. A mobile presence service is an essential component of a social network application because it maintains each mobile user's presence information, such as the current status (online/offline), GPS location and network address, and also updates the user's online friends with the information continually. 
If presence updates occur frequently, the enormous number of messages distributed by presence servers may lead to a scalability problem in a large-scale mobile presence service. To address the problem, we propose an efficient and scalable server architecture, called Presence Cloud, which enables mobile presence services to support large-scale social network applications. 
When a mobile user joins a network, Presence Cloud searches for the presence of his/her friends and notifies them of his/her arrival. Presence Cloud organizes presence servers into a quorum-based server-to-server architecture for efficient presence searching. It also leverages a directed search algorithm and a one-hop caching strategy to achieve small constant search latency. 
We analyze the performance of Presence Cloud in terms of search cost and search satisfaction level. The search cost is defined as the total number of messages generated by the presence servers when a user arrives; the search satisfaction level is defined as the time it takes to search for the arriving user's friend list. The simulation results demonstrate that Presence Cloud achieves performance gains in search cost without compromising search satisfaction.
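A minimal sketch of how a quorum-based server-to-server search could be organized, assuming a simple grid-quorum layout; the server numbering, grid shape, and lookup details are illustrative and not the paper's actual protocol.

import math

def grid_quorum(server_id, n_servers):
    """Return the quorum (set of server ids) for one presence server.

    Servers are arranged on a sqrt(n) x sqrt(n) grid; a server's quorum is
    its whole row plus its whole column, so any two quorums intersect.
    """
    side = int(math.ceil(math.sqrt(n_servers)))
    row, col = divmod(server_id, side)
    row_members = {row * side + c for c in range(side)}
    col_members = {r * side + col for r in range(side)}
    return {s for s in row_members | col_members if s < n_servers}

# Because any two quorums share at least one server, a user's presence
# published to its server's quorum can be found by querying any other quorum.
assert grid_quorum(3, 9) & grid_quorum(7, 9)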


A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data
Feature selection involves identifying a subset of the most useful features that produces results comparable to those of the original, entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. 
While the efficiency concerns the time required to find a subset of features, the effectiveness is related to the quality of the subset of features. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. 
The FAST algorithm works in two steps. In the first step, features are divided into clusters by using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form a subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. 
To ensure the efficiency of FAST, we adopt the efficient minimum-spanning tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST and several representative feature selection algorithms, namely, FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely, the probability based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER before and after feature selection. 
The results, on 35 publicly available real-world high-dimensional image, microarray, and text data sets, demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
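A minimal sketch of the two-step idea under simplifying assumptions: absolute Pearson correlation stands in for the paper's symmetric-uncertainty measure, and a fixed threshold cuts the minimum spanning tree into feature clusters. This is not the authors' implementation.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def fast_like_selection(X, y, threshold=0.7):
    """X: (samples, features) array, y: target vector. Returns selected feature indices."""
    n_features = X.shape[1]
    # Edge weight = 1 - |corr(fi, fj)|: strongly correlated features are "close".
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > threshold] = 0.0                 # cut long MST edges to form clusters
    n_clusters, labels = connected_components((mst + mst.T) > 0, directed=False)
    # From each cluster, keep the feature most correlated with the target.
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    return [int(np.argmax(np.where(labels == c, relevance, -1.0)))
            for c in range(n_clusters)]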


A Generalized Flow-based Method for Analysis of Implicit Relationships on Wikipedia
ABSTRACT
We focus on measuring relationships between pairs of objects in Wikipedia, whose pages can be regarded as individual objects. Two kinds of relationships exist between two objects in Wikipedia: an explicit relationship is represented by a single link between the two pages for the objects, and an implicit relationship is represented by a link structure containing the two pages. 
Some of the previously proposed methods for measuring relationships are cohesion-based methods, which underestimate objects having high degrees, although such objects could be important in constituting relationships in Wikipedia. 
The other methods are inadequate for measuring implicit relationships because they use only one or two of the following three important factors: distance, connectivity, and cocitation. We propose a new method using a generalized maximum flow that reflects all three factors and does not underestimate objects having high degrees. 
We confirm through experiments that our method can measure the strength of a relationship more appropriately than these previously proposed methods do. Another remarkable aspect of our method is mining elucidatory objects, that is, objects constituting a relationship. We explain that mining elucidatory objects would open a novel way to deeply understand a relationship.
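As a toy illustration of the connectivity factor only, the sketch below computes a plain maximum flow between two article nodes with networkx; the paper's method instead uses a generalized maximum flow (with edge gains) that also captures distance and cocitation, which this simplification does not.

import networkx as nx

def relationship_strength(links, source, target, capacity=1.0):
    """links: iterable of (from_page, to_page) Wikipedia links."""
    g = nx.DiGraph()
    for u, v in links:
        g.add_edge(u, v, capacity=capacity)
    if source not in g or target not in g:
        return 0.0
    value, _ = nx.maximum_flow(g, source, target)
    return value

# Two link-disjoint paths between the pages give a flow value of 2.0.
links = [("Alice", "Graph"), ("Graph", "Bob"), ("Alice", "Flow"), ("Flow", "Bob")]
print(relationship_strength(links, "Alice", "Bob"))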


A Proxy-Based Approach to Continuous Location-Based Spatial Queries in Mobile Environments
Abstract: Caching valid regions of spatial queries at mobile clients is effective in reducing the number of queries submitted by mobile clients and query load on the server. However, mobile clients suffer from longer waiting time for the server to compute valid regions. We propose in this paper a proxy-based approach to continuous nearest-neighbor (NN) and window queries. 
The proxy creates estimated valid regions (EVRs) for mobile clients by exploiting spatial and temporal locality of spatial queries. For NN queries, we devise two new algorithms to accelerate EVR growth, leading the proxy to build effective EVRs even when the cache size is small. On the other hand, we propose to represent the EVRs of window queries in the form of vectors, called estimated window vectors (EWVs), to achieve larger estimated valid regions. 
This novel representation and the associated creation algorithm result in more effective EVRs for window queries. In addition, due to their distinct characteristics, we use separate index structures, namely an EVR-tree and a grid index, for NN queries and window queries, respectively. 
To further increase efficiency, we develop algorithms to exploit the results of NN queries to aid grid index growth, benefiting EWV creation of window queries. Similarly, the grid index is utilized to support NN query answering and EVR updating. We conduct several experiments for performance evaluation. The experimental results show that the proposed approach significantly outperforms the existing proxy-based approaches.


AML: Efficient Approximate Membership Localization within a Web-based Join Framework
ABSTRACT
In this paper, we propose a new type of dictionary-based entity recognition problem, named Approximate Membership Localization (AML). The popular Approximate Membership Extraction (AME) provides full coverage of the true matched substrings from a given document, but its many redundancies make the AME process inefficient and degrade the performance of real-world applications that use the extracted substrings. 
The AML problem targets locating non-overlapping substrings, which are a better approximation of the true matched substrings, without generating overlapping redundancies. In order to perform AML efficiently, we propose an optimized algorithm, P-Prune, that prunes a large part of the overlapping redundant matched substrings before generating them. 
Our study using several real-world data sets demonstrates the efficiency of P-Prune over a baseline method. We also study AML in application to a proposed web-based join framework scenario, a search-based approach that joins two tables using dictionary-based entity recognition from web documents. The results not only prove the advantage of AML over AME, but also demonstrate the effectiveness of our search-based approach.


Analysis of Distance-Based Location Management in Wireless Communication Networks
ABSTRACT
The performance of dynamic distance-based location management schemes (DBLMS) in wireless communication networks is analyzed. A Markov chain is developed as a mobility model to describe the movement of a mobile terminal in 2D cellular structures. The paging area residence time is characterized for arbitrary cell residence time by using the Markov chain. 
The expected number of paging area boundary crossings and the cost of the distance-based location update method are analyzed by using the classical renewal theory for two different call handling models. For the call plus location update model, two cases are considered. 
In the first case, the inter call time has an arbitrary distribution and the cell residence time has an exponential distribution. In the second case, the inter call time has a hyper-Erlang distribution and the cell residence time has an arbitrary distribution. 
For the call without location update model, both inter call time and cell residence time can have arbitrary distributions. Our analysis makes it possible to find the optimal distance threshold that minimizes the total cost of location management in a DBLMS.


Anonymization of Centralized and Distributed Social Networks by Sequential Clustering
We study the problem of privacy-preservation in social networks. We consider the distributed setting in which the network data is split between several data holders. The goal is to arrive at an anonymized view of the unified network without revealing to any of the data holders information about links between nodes that are controlled by other data holders. 
To that end, we start with the centralized setting and offer two variants of an anonymization algorithm which is based on sequential clustering (Sq). Our algorithms significantly outperform the SaNGreeA algorithm due to Campan and Truta which is the leading algorithm for achieving anonymity in networks by means of clustering. 
We then devise secure distributed versions of our algorithms. To the best of our knowledge, this is the first study of privacy preservation in distributed social networks. We conclude by outlining future research proposals in that direction.


Cloud FTP: A Case Study of Migrating Traditional Applications to the Cloud
ABSTRACT: 
Cloud computing is growing rapidly because it offers on-demand computing power and capacity. The power of the cloud enables dynamic scalability of applications facing various business requirements. However, challenges arise when considering the large number of existing applications. 
In this work we propose to move the traditional FTP service to the cloud. We implement FTP service on Windows Azure Platform along with the auto-scaling cloud feature. Based on this, we implement a benchmark to measure the performance of our Cloud FTP. 
This case study illustrates the potential benefits and technical issues associated with migrating traditional applications to the cloud.


Crowdsourced Trace Similarity with Smartphones
Smartphones are nowadays equipped with a number of sensors, such as WiFi, GPS, accelerometers, etc. This capability allows smartphone users to easily engage in crowdsourced computing services, which contribute to the solution of complex problems in a distributed manner. 
In this work, we leverage such a computing paradigm to efficiently solve the following problem: comparing a query trace Q against a crowd of traces generated and stored on distributed smartphones. 
Our proposed framework, coined SmartTrace+, provides an effective solution without disclosing any part of the crowd traces to the query processor. SmartTrace+ relies on an in-situ data storage model and intelligent top-K query processing algorithms that exploit distributed trajectory similarity measures, resilient to spatial and temporal noise, in order to derive the most relevant answers to Q. We evaluate our algorithms on both synthetic and real workloads. 
We describe our prototype system developed on the Android OS. The solution is deployed over our own SmartLab testbed of 25 smartphones. Our study reveals that computations over SmartTrace+ result in substantial energy conservation; in addition, results can be computed faster than with competitive approaches.
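A minimal sketch of the top-K step only, assuming each device scores its locally stored trace against the query and returns just the score, so raw traces never leave the device; the naive point-wise distance below is a stand-in for the paper's noise-resilient similarity measure.

import heapq
import math

def trace_distance(q, trace):
    """Mean Euclidean distance between corresponding points of two traces."""
    n = min(len(q), len(trace))
    return sum(math.dist(a, b) for a, b in zip(q, trace)) / n

def top_k_similar(q, device_traces, k=3):
    """device_traces: {device_id: [(x, y), ...]}. Returns k (distance, device_id) pairs."""
    scores = [(trace_distance(q, t), dev) for dev, t in device_traces.items()]
    return heapq.nsmallest(k, scores)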


Discovery and Verification of Neighbor Positions in Mobile Ad Hoc Networks
A growing number of ad hoc networking protocols and location-aware services require that mobile nodes learn the position of their neighbors. However, such a process can be easily abused or disrupted by adversarial nodes. In the absence of a priori trusted nodes, the discovery and verification of neighbor positions present challenges that have been scarcely investigated in the literature. 
In this paper, we address this open issue by proposing a fully distributed cooperative solution that is robust against independent and colluding adversaries, and can be impaired only by an overwhelming presence of adversaries. 
Results show that our protocol can thwart more than 99 percent of the attacks under the best possible conditions for the adversaries, with minimal false positive rates.


Distributed Web Systems Performance Forecasting using Turning Bands Method
ABSTRACT: 
With the development of distributed computer systems (DCSs) in networked industrial and manufacturing applications on the World Wide Web (WWW) platform, including service-oriented architecture and Web of Things QoS-aware systems, it has become important to predict Web performance. 
In this paper, we present Web performance prediction in time and in space by forecasting the download of a Web resource using the Turning Bands (TB) geostatistical simulation method. 
Real-life data for the research were obtained in an active experiment conducted by our multi-agent measurement system MWING, which monitors a group of Web servers worldwide from agents located in different geographical locations in Poland. 
The results show good quality of Web performance prediction by means of the TB method, especially in the case when European Web servers were monitored by an MWING agent located in Gliwice, Poland.


Dynamic Personalized Recommendation on Sparse Data
Abstract: Recommendation techniques are very important in the fields of E-commerce and other Web-based services. One of the main difficulties is dynamically providing high-quality recommendation on sparse data. In this paper, a novel dynamic personalized recommendation algorithm is proposed, in which information contained in both ratings and profile contents are utilized by exploring latent relations between ratings, a set of dynamic features are designed to describe user preferences in multiple phases, and finally a recommendation is made by adaptively weighting the features. Experimental results on public datasets show that the proposed algorithm has satisfying performance.
Nowadays the internet has become an indispensable part of our lives, and it provides a platform for enterprises to deliver information about products and services to the customers conveniently. As the amount of this kind of information is increasing rapidly, one great challenge is ensuring that proper content can be delivered quickly to the appropriate customers. Personalized recommendation is a desirable way to improve customer satisfaction and retention. 
There are mainly three approaches to recommendation engines based on different data analysis methods, i.e., rule-based, content-based and collaborative filtering. Among them, collaborative filtering (CF) requires only data about past user behavior like ratings, and its two main approaches are the neighborhood methods and latent factor models. 
The neighborhood methods can be user-oriented or item-oriented. They try to find like-minded users or similar items on the basis of co-ratings, and predict based on the ratings of the nearest neighbors. Latent factor models try to learn latent factors from the pattern of ratings using techniques like matrix factorization and use the factors to compute the usefulness of items to users. CF has achieved great success and has been proven to perform well in scenarios where user preferences are relatively static.
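A minimal sketch of a latent factor model of the kind mentioned above: plain matrix factorization trained with stochastic gradient descent on the observed ratings; the hyperparameters and data layout are illustrative assumptions, not those of the proposed algorithm.

import numpy as np

def matrix_factorization(ratings, n_users, n_items, n_factors=8,
                         lr=0.01, reg=0.05, epochs=50, seed=0):
    """ratings: list of (user, item, value) triples. Returns (P, Q) factor matrices."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# The predicted usefulness of item i for user u is the dot product P[u] @ Q[i].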


Evaluating Data Reliability: An Evidential Answer with Application to a Web-Enabled Data Warehouse
ABSTRACT: 
There are many available methods to integrate information source reliability in an uncertainty representation, but there are only a few works focusing on the problem of evaluating this reliability. 
However, data reliability and confidence are essential components of a data warehousing system, as they influence subsequent retrieval and analysis. In this paper, we propose a generic method to assess data reliability from a set of criteria using the theory of belief functions. Customizable criteria and insightful decisions are provided.
The chosen illustrative example comes from real-world data from the Sym’Previus predictive microbiology-oriented data warehouse.
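A minimal sketch of how two reliability criteria could be fused with Dempster's rule over the frame {reliable, unreliable}; the criteria names and mass values are invented for illustration, and the paper's actual criteria and combination choices may differ.

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozensets over {'R', 'U'} to mass values."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Criterion 1 (e.g., source reputation) and criterion 2 (e.g., data freshness).
m_reputation = {frozenset({'R'}): 0.6, frozenset({'R', 'U'}): 0.4}
m_freshness = {frozenset({'U'}): 0.3, frozenset({'R', 'U'}): 0.7}
print(dempster_combine(m_reputation, m_freshness))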


Exploiting Ubiquitous Data Collection for Mobile Users in Wireless Sensor Networks
ABSTRACT: 
We study the ubiquitous data collection for mobile users in wireless sensor networks. People with handheld devices can easily interact with the network and collect data. We propose a novel approach for mobile users to collect the network-wide data. 
The routing structure of data collection is additively updated with the movement of the mobile user. With this approach, we only perform a limited modification to update the routing structure while the routing performance is bounded and controlled compared to the optimal performance. 
Our analysis shows that the proposed approach is scalable in maintenance overheads, performs efficiently in the routing performance, and provides continuous data delivery during the user movement. 
We implement the proposed protocol in a prototype system and test its feasibility and applicability by a 49-node testbed. We further conduct extensive simulations to examine the efficiency and scalability of our protocol with varied network settings.


Finding Rare Classes: Active Learning with Generative and Discriminative Models
ABSTRACT
Discovering rare categories and classifying new instances of them is an important data mining issue in many fields, but fully supervised learning of a rare class classifier is prohibitively costly in labeling effort. There has therefore been increasing interest both in active discovery, to identify new classes quickly, and in active learning, to train classifiers with minimal supervision. 
These goals occur together in practice and are intrinsically related because examples of each class are required to train a classifier. Nevertheless, very few studies have tried to optimize them together, meaning that data mining for rare classes in new domains makes inefficient use of human supervision. 
Developing active learning algorithms to optimize both rare class discovery and classification simultaneously is challenging because discovery and classification have conflicting requirements in query criteria.
In this paper we address these issues with two contributions: a unified active learning model to jointly discover new categories and learn to classify them by adapting query criteria online; and a classifier combination algorithm that switches between generative and discriminative classifiers as learning progresses. Extensive evaluation on a batch of standard UCI and vision datasets demonstrates the superiority of this approach over existing methods.


Ranking on Data Manifold with Sink Points
ABSTRACT: 
Ranking is an important problem in various applications, such as Information Retrieval (IR), natural language processing, computational biology, and social sciences. Many ranking approaches have been proposed to rank objects according to their degrees of relevance or importance. Beyond these two goals, diversity has also been recognized as a crucial criterion in ranking. 
Top-ranked results are expected to convey as little redundant information as possible, and cover as many aspects as possible. However, existing ranking approaches either take no account of diversity, or handle it separately with some heuristics. In this paper, we introduce a novel approach, Manifold Ranking with Sink Points (MRSP), to address diversity as well as relevance and importance in ranking. 
Specifically, our approach uses a manifold ranking process over the data manifold, which can naturally find the most relevant and important data objects. Meanwhile, by turning ranked objects into sink points on data manifold, we can effectively prevent redundant objects from receiving a high rank. MRSP not only shows a nice convergence property, but also has an interesting and satisfying optimization explanation. 
We applied MRSP to two application tasks, update summarization and query recommendation, where diversity is of great concern in ranking. Experimental results on both tasks show strong empirical performance of MRSP as compared to existing ranking approaches.
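A minimal sketch of the manifold ranking iteration with sink points: objects already selected are clamped to a score of zero at every step, so objects redundant with them stop accumulating rank. The affinity normalization and the value of alpha are standard manifold-ranking choices, not necessarily the paper's exact setup.

import numpy as np

def manifold_ranking_with_sinks(W, query_idx, sinks, alpha=0.9, iters=200):
    """W: symmetric non-negative affinity matrix; returns the ranking scores f."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt              # symmetrically normalized affinity
    y = np.zeros(len(W))
    y[query_idx] = 1.0                           # prior score on the query object
    f = y.copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y      # standard manifold ranking update
        f[list(sinks)] = 0.0                     # turn already-selected objects into sinks
    return f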


Region-based Foldings in Process Discovery 
ABSTRACT: 
A central problem in the area of Process Mining is to obtain a formal model that represents the processes that are conducted in a system. If realized, this simple motivation allows for powerful techniques that can be used to formally analyze and optimize a system, without the need to resort to its semiformal and sometimes inaccurate specification. 
The problem addressed in this paper is known as Process Discovery: to obtain a formal model from a set of system executions. The theory of regions is a valuable tool in process discovery: it aims at learning a formal model (Petri nets) from a set of traces. In its genuine form, the theory is applied to an automaton, and therefore one should convert the traces into an acyclic automaton in order to apply these techniques. 
Given that the complexity of the region-based techniques depends on the size of the input automata, revealing the underlying cycles and folding the initial automaton can significantly alleviate the complexity of the region-based techniques. In this paper, we follow this idea by incorporating region information in the cycle detection algorithm, enabling the identification of complex cycles that cannot be obtained efficiently with state-of-the-art techniques. 
The experimental results obtained by the devised tool suggest that the techniques presented in this paper are a significant step toward widening the application of the theory of regions in Process Mining to industrial scenarios.


Research in Progress - Defending Android Smartphones from Malware Attacks
Smart phones are becoming enriched with confidential information due to their powerful computational capabilities and attractive communications features. The Android smart phone is one of the most widely used platforms by businesses and users alike. This is partially because Android smart phones use the free, open-source Linux as the underlying operating system, which allows development of applications by any software developer. 
This research study aims to explore the security risks associated with the use of Android smart phones and the sensitive information they contain. To this end, the researcher devised a survey questionnaire to investigate and further understand security threats targeting Android smart phones. 
The survey also intended to study the scope of malware attacks targeting Android phones and the effectiveness of existing defense measures. The study surveyed average Android users as the target population to understand how they perceive security and what security controls they use to protect their smart phones.


Scalable and Secure Sharing of Personal Health Records in Cloud Computing using Attribute based Encryption
Abstract:
Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. 
To assure the patients’ control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. 
In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semi-trusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patient’s PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains that greatly reduces the key management complexity for owners and users. 
A high degree of patient privacy is guaranteed simultaneously by exploiting multi-authority ABE. Our scheme also enables dynamic modification of access policies or file attributes, and supports efficient on-demand user/attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability and efficiency of our proposed scheme.


Secure Encounter-based Mobile Social Networks: Requirements, Designs and Tradeoffs
Abstract: Encounter-based social networks link users who share a location at the same time, as opposed to traditional social network paradigms of linking users who have an offline friendship. This approach presents fundamentally different challenges from those tackled by previous designs. In this paper, we explore functional and security requirements for these new systems, such as availability, security, and privacy, and present several design options for building secure encounter-based social networks. 
We examine one recently proposed encounter-based social network design and compare it to a set of idealized security and functionality requirements. We show that it is vulnerable to several attacks, including impersonation, collusion, and privacy breaching, even though it was designed specifically for security. 
Mindful of the possible pitfalls, we construct a flexible framework for secure encounter-based social networks, which can be used to construct networks that offer different security, privacy, and availability guarantees. We describe two example constructions derived from this framework, and consider each in terms of the ideal requirements. 
Some of our new designs fulfill more requirements in terms of system security, reliability, and privacy than previous work. We also evaluate real-world performance of one of our designs by implementing a proof-of-concept iPhone application called MeetUp. Experiments highlight the potential of our system.


Security Analysis of a Single Sign-On Mechanism for Distributed Computer Networks 
ABSTRACT: 
In this paper, we demonstrate that the Chang–Lee single sign-on (SSO) scheme is actually insecure, as it fails to meet credential privacy and soundness of authentication. Specifically, we present two impersonation attacks. 
The first attack allows a malicious service provider, who has successfully communicated with a legal user twice, to recover the user’s credential and then to impersonate the user to access resources and services offered by other service providers. In another attack, an outsider without any credential may be able to enjoy network services freely by impersonating any legal user or a nonexistent user. 
We identify the flaws in their security arguments to explain why attacks are possible against their SSO scheme. Our attacks also apply to another SSO scheme proposed by Hsu and Chuang, which inspired the design of the Chang–Lee scheme. Moreover, by employing an efficient verifiable encryption of RSA signatures proposed by Ateniese, we propose an improvement for repairing the Chang–Lee scheme.


SPOC: A Secure and Privacy-Preserving Opportunistic Computing Framework for Mobile-Healthcare Emergency
With the pervasiveness of smart phones and the advance of wireless body sensor networks (BSNs), mobile Healthcare (m-Healthcare), which extends the operation of Healthcare provider into a pervasive environment for better health monitoring, has attracted considerable interest recently. 
However, the flourish of m-Healthcare still faces many challenges including information security and privacy preservation. In this paper, we propose a secure and privacy-preserving opportunistic computing framework, called SPOC, for m-Healthcare emergency. With SPOC, smart phone resources including computing power and energy can be opportunistically gathered to process the computing-intensive personal health information (PHI) during m-Healthcare emergency with minimal privacy disclosure. 
Specifically, to balance PHI privacy disclosure against the high reliability of PHI processing and transmission in an m-Healthcare emergency, we introduce an efficient user-centric privacy access control into the SPOC framework, which is based on attribute-based access control and a new privacy-preserving scalar product computation (PPSPC) technique, and allows a medical user to decide who can participate in the opportunistic computing to assist in processing his overwhelming PHI data. 
Detailed security analysis shows that the proposed SPOC framework can efficiently achieve user-centric privacy access control in an m-Healthcare emergency. In addition, performance evaluations via extensive simulations demonstrate SPOC's effectiveness in terms of providing highly reliable PHI processing and transmission while minimizing privacy disclosure during an m-Healthcare emergency.


SSD: A Robust RF Location Fingerprint Addressing Mobile Devices’ Heterogeneity
ABSTRACT: Fingerprint-based methods are widely adopted for indoor localization purpose because of their cost-effectiveness compared to other infrastructure-based positioning systems. However, the popular location fingerprint, Received Signal Strength (RSS), is observed to differ significantly across different devices' hardware even under the same wireless conditions. 
We derive analytically a robust location fingerprint definition, the Signal Strength Difference (SSD), and verify its performance experimentally using a number of different mobile devices with heterogeneous hardware. Our experiments have also considered both Wi-Fi and Bluetooth devices, as well as both Access-Point (AP)-based localization and Mobile-Node (MN)-assisted localization. 
We present the results of two well-known localization algorithms (K Nearest Neighbor and Bayesian Inference) when our proposed fingerprint is used, and demonstrate its robustness when the testing device differs from the training device. 
We also compare these SSD-based localization algorithms' performance against that of two other approaches in the literature that are designed to mitigate the effects of mobile node hardware variations, and show that SSD-based algorithms achieve better accuracy.
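A minimal sketch of the SSD idea: a fingerprint is built from RSS differences between AP pairs (here, relative to one reference AP), which cancels a constant per-device gain offset, and is then matched with K nearest neighbors; the reference-AP choice and K are illustrative assumptions.

import numpy as np

def ssd_fingerprint(rss):
    """rss: vector of RSS readings (dBm) from N APs -> N-1 differences w.r.t. AP 0."""
    rss = np.asarray(rss, dtype=float)
    return rss[1:] - rss[0]      # a device-specific offset added to every reading cancels out

def knn_locate(query_rss, radio_map, k=3):
    """radio_map: list of (location, rss_vector) training fingerprints."""
    q = ssd_fingerprint(query_rss)
    dists = sorted((np.linalg.norm(q - ssd_fingerprint(r)), loc) for loc, r in radio_map)
    return np.mean([np.asarray(loc, dtype=float) for _, loc in dists[:k]], axis=0)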


Target Tracking and Mobile Sensor Navigation in Wireless Sensor Networks 
ABSTRACT: This work studies the problem of tracking signal-emitting mobile targets using navigated mobile sensors based on signal reception. Since the mobile target’s maneuver is unknown, the mobile sensor controller utilizes the measurement collected by a wireless sensor network in terms of the mobile target signal’s time of arrival (TOA). 
The mobile sensor controller acquires the TOA measurement information from both the mobile target and the mobile sensor for estimating their locations before directing the mobile sensor’s movement to follow the target. We propose a min-max approximation approach to estimate the location for tracking which can be efficiently solved via semidefinite programming (SDP) relaxation, and apply a cubic function for mobile sensor navigation. 
We estimate the locations of the mobile sensor and target jointly to improve the tracking accuracy. To further improve the system performance, we propose a weighted tracking algorithm that uses the measurement information more efficiently. Our results demonstrate that the proposed algorithm provides good tracking performance and can quickly direct the mobile sensor to follow the mobile target.
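A simplified sketch of TOA-based location estimation via nonlinear least squares, used here as a stand-in for the paper's min-max approximation solved by SDP relaxation; the anchor layout, propagation speed constant, and noise handling are assumptions.

import numpy as np
from scipy.optimize import least_squares

C = 3e8  # propagation speed (m/s)

def locate_from_toa(anchors, toa, x0=None):
    """anchors: (M, 2) sensor positions; toa: (M,) times of arrival in seconds."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = C * np.asarray(toa, dtype=float)    # convert TOA to range estimates

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    x0 = anchors.mean(axis=0) if x0 is None else x0
    return least_squares(residuals, x0).x        # estimated (x, y) position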


T-Drive: Enhancing Driving Directions with Taxi Drivers’ Intelligence
ABSTRACT: This paper presents a smart driving direction system leveraging the intelligence of experienced drivers. In this system, GPS-equipped taxis are employed as mobile sensors probing the traffic rhythm of a city and taxi drivers’ intelligence in choosing driving directions in the physical world. 
We propose a time-dependent landmark graph to model the dynamic traffic pattern as well as the intelligence of experienced drivers so as to provide a user with the practically fastest route to a given destination at a given departure time. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. 
Based on this graph, we design a two-stage routing algorithm to compute the practically fastest and customized route for end users. We build our system based on a real-world trajectory data set generated by over 33,000 taxis in a period of three months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. 
As a result, 60-70 percent of the routes suggested by our method are faster than those of the competing methods, and 20 percent of the routes share the same results. On average, 50 percent of our routes are at least 20 percent faster than the competing approaches.
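A minimal sketch of routing on a time-dependent graph: a Dijkstra variant in which each edge's travel time depends on the departure time, as with the time-dependent landmark graph above; the graph encoding and the assumption that earlier departures never arrive later (FIFO travel times) are simplifications.

import heapq

def fastest_route_time(graph, source, target, depart):
    """graph[u] = [(v, travel_time_fn), ...] where travel_time_fn(t) is the expected
    travel time (minutes) when leaving u at time t. Returns the total travel time."""
    best = {source: depart}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t - depart
        if t > best.get(u, float("inf")):
            continue                              # stale heap entry
        for v, travel_time in graph.get(u, ()):
            arrival = t + travel_time(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None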


Toward Privacy Preserving and Collusion Resistance in a Location Proof Updating System
ABSTRACT:
Today's location-sensitive services rely on users' mobile devices to determine their current location. This allows malicious users to access a restricted resource or provide bogus alibis by cheating on their locations. 
To address this issue, we propose a Privacy-Preserving Location Proof Updating System (APPLAUS), in which co-located Bluetooth-enabled mobile devices mutually generate location proofs and send updates to a location proof server. Periodically changed pseudonyms are used by the mobile devices to protect source location privacy from each other, and from the untrusted location proof server.
We develop a user-centric location privacy model in which individual users evaluate their location privacy levels and decide whether and when to accept location proof requests. In order to defend against colluding attacks, we also present betweenness ranking-based and correlation clustering-based approaches for outlier detection. 
APPLAUS can be implemented with existing network infrastructure, and can be easily deployed on Bluetooth-enabled mobile devices with little computation or power cost. Extensive experimental results show that APPLAUS can effectively provide location proofs, significantly preserve the source location privacy, and effectively detect colluding attacks.




FOR MORE ABSTRACTS, IEEE BASE PAPER / REFERENCE PAPERS AND NON IEEE PROJECT ABSTRACTS

CONTACT US
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com 


EMBEDDED SYSTEM PROJECTS IN
Embedded Systems using Microcontrollers, VLSI, DSP, Matlab, Power Electronics, Power Systems, Electrical
For Embedded Projects - 044-45000083, 7418497098 
ncctchennai@gmail.com, www.ncct.in


Project Support Services
Complete Guidance | 100% Result for all Projects | On time Completion | Excellent Support | Project Completion Experience Certificate | Free Placements Services | Multi Platform Training | Real Time Experience


TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office


WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive FREE Projects Titles, List / Abstracts  / IEEE Base Papers DVD… Walk in to our Office and Collect the same Or

Send your College ID scan copy, Your Mobile No & Complete Postal Address, Mentioning you are interested to Receive DVD through Courier at Free of Cost


Own Projects
Own Projects ! or New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly

We will do any Projects…





Tuesday, July 2, 2013

Matlab Project Titles, Matlab Project Abstracts, Matlab IEEE Project Abstracts, Matlab Projects abstracts for CSE IT ECE EEE MCA, Download Matlab Titles, Download Matlab Project Abstracts, Download IEEE Matlab Abstracts

MATLAB PROJECTS - ABSTRACTS
A Watermarking Based Medical Image Integrity Control System and an Image Moment Signature for Tampering Characterization
In this paper, we present a medical image integrity verification system to detect and approximate local malevolent image alterations (e.g. removal or addition of lesions) as well as identifying the nature of a global processing an image may have undergone (e.g. lossy compression, filtering). 
The proposed integrity analysis process is based on non-significant region watermarking with signatures extracted from different pixel blocks of interest, which are compared with the recomputed ones at the verification stage. A set of three signatures is proposed. 
The first two, devoted to detection and localization of modifications, are cryptographic hashes and checksums, while the last one is derived from image moment theory. In this paper, we first show how geometric moments can be used to approximate any local modification by its nearest generalized 2D Gaussian. 
We then demonstrate how ratios between original and recomputed geometric moments can be used as image features in a classifier based strategy in order to determine the nature of a global image processing. 
Experimental results considering both local and global modifications in MRI and retina images illustrate the overall performance of our approach. With a pixel block signature about 200 bits long, it is possible to detect a modification, roughly localize it, and get an idea about the nature of the image tampering.
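A minimal sketch of the moment-based part of such a signature: low-order raw geometric moments of a pixel block, whose ratios between embedding time and verification time could serve as the classifier features described above; the block size and moment orders are illustrative assumptions.

import numpy as np

def geometric_moments(block, max_order=2):
    """Return {(p, q): m_pq}, the raw geometric moments of a 2-D pixel block."""
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return {(p, q): float(np.sum((xs ** p) * (ys ** q) * block))
            for p in range(max_order + 1)
            for q in range(max_order + 1)
            if p + q <= max_order}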


A Hybrid Multiview Stereo Algorithm for Modeling Urban Scenes
We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). 
We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. 
The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). 
The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.


Adaptive fingerprint image enhancement with emphasis on preprocessing of data.
Abstract
This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image. 
Five processing blocks comprise the adaptive fingerprint enhancement method, where four of these blocks are updated in our proposed system. 
Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied. 
These processing blocks yield an improved and new adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.


Airborne Vehicle Detection in Dense Urban Areas Using HoG Features and Disparity Maps 
Vehicle detection has been an important research field for years as there are a lot of valuable applications, ranging from support of traffic planners to real-time traffic management. Especially detection of cars in dense urban areas is of interest due to the high traffic volume and the limited space. In city areas many car-like objects (e.g., dormers) appear which might lead to confusion. 
Additionally, the inaccuracy of road databases supporting the extraction process has to be handled in a proper way. This paper describes an integrated real-time processing chain which utilizes multiple occurrence of objects in images. At least two subsequent images, data of exterior orientation, a global DEM, and a road database are used as input data. 
The segments of the road database are projected in the non-geocoded image using the corresponding height information from the global DEM. From amply masked road areas in both images a disparity map is calculated. This map is used to exclude elevated objects above a certain height (e.g., buildings and vegetation). 
Additionally, homogeneous areas are excluded by a fast region growing algorithm. Remaining parts of one input image are classified based on ‘Histogram of Oriented Gradients’ (HoG) features. The implemented approach has been verified using image sections from two different flights and manually extracted ground truth data from the inner city of Munich. The evaluation shows a quality of up to 70 percent.
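A minimal sketch of the classification step alone, assuming candidate patches have already been extracted by the masking steps described above: HoG features from equal-size grayscale patches fed to a linear SVM; the parameter values and libraries (scikit-image, scikit-learn) are illustrative choices, not necessarily those of the paper.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """patches: iterable of equal-size 2-D grayscale arrays (e.g., 64x64 candidates)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_vehicle_classifier(car_patches, background_patches):
    X = np.vstack([hog_features(car_patches), hog_features(background_patches)])
    y = np.hstack([np.ones(len(car_patches)), np.zeros(len(background_patches))])
    return LinearSVC().fit(X, y)   # later: clf.predict(hog_features(candidate_patches))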


An Optimized Wavelength Band Selection for Heavily Pigmented Iris Recognition 
Commercial iris recognition systems usually acquire images of the eye in 850-nm band of the electromagnetic spectrum. In this work, the heavily pigmented iris images are captured at 12 wavelengths, from 420 to 940 nm. 
The purpose is to find the most suitable wavelength band for the heavily pigmented iris recognition. A multispectral acquisition system is first designed for imaging the iris at narrow spectral bands in the range of 420-940 nm. Next, a set of 200 human black irises which correspond to the right and left eyes of 100 different subjects are acquired for an analysis. 
Finally, the most suitable wavelength for heavily pigmented iris recognition is found based on two approaches: 1) the quality assurance of texture; 2) matching performance, measured by the equal error rate (EER) and false rejection rate (FRR). 
This result is supported by visual observations of magnified detailed local iris texture information. The experimental results suggest that there exists a most suitable wavelength band for heavily pigmented iris recognition when using a single band of wavelength as illumination.


Analysis Operator Learning and Its Application to Image Reconstruction
Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. 
An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends heavily on the choice of a suitable operator. 
In this work, we present an algorithm for learning an analysis operator from training images. Our method is based on an ℓp-norm minimization on the set of full-rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. 
Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques.
  

Atmospheric Turbulence Mitigation Using Complex Wavelet-Based Fusion 
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying, perturbations, makes a model-based solution difficult and in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence which can severely degrade a region of interest (ROI). 
In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual tree complex wavelet transform. Finally, contrast enhancement is applied. 
We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios.
  

Automatic Detection and Reconstruction of Building Radar Footprints From Single VHR SAR Images 
The spaceborne synthetic aperture radar (SAR) systems Cosmo-SkyMed, TerraSAR-X, and TanDEM-X acquire imagery with very high spatial resolution (VHR), supporting various important application scenarios, such as damage assessment in urban areas after natural disasters. To ensure a reliable, consistent, and fast extraction of the information from the complex SAR scenes, automatic information extraction methods are essential. Focusing on the analysis of urban areas, which is of prime interest of VHR SAR, in this paper, we present a novel method for the automatic detection and 2-D reconstruction of building radar footprints from VHR SAR scenes. 
Unlike most of the literature methods, the proposed approach can be applied to single images. The method is based on the extraction of a set of low-level features from the images and on their composition to more structured primitives using a production system. Then, the concept of semantic meaning of the primitives is introduced and used for both the generation of building candidates and the radar footprint reconstruction. 
The semantic meaning represents the probability that a primitive belongs to a certain scattering class (e.g., double bounce, roof, facade) and has been defined in order to compensate for the lack of detectable features in single images. Indeed, it allows the selection of the most reliable primitives and footprint hypotheses on the basis of fuzzy membership grades. 
The efficiency of the proposed method is demonstrated by processing a 1-m resolution TerraSAR-X spotbeam scene containing flat- and gable-roof buildings at various settings. The results show that the method has a high overall detection rate and that radar footprints are well reconstructed, in particular for medium and large buildings.
  

Compressive Framework for Demosaicing of Natural Images 
Typical consumer digital cameras sense only one out of three color components per image pixel. The problem of demosaicing deals with interpolating those missing color components. In this paper, we present compressive demosaicing (CD), a framework for demosaicing natural images based on the theory of compressed sensing (CS). 
Given sensed samples of an image, CD employs a CS solver to find the sparse representation of that image under a fixed sparsifying dictionary Ψ. As opposed to state of the art CS-based demosaicing approaches, we consider a clear distinction between the interchannel (color) and interpixel correlations of natural images. 
Utilizing some well-known facts about the human visual system, those two types of correlations are utilized in a nonseparable format to construct the sparsifying transform Ψ. Our simulation results verify that CD performs better (both visually and in terms of PSNR) than leading demosaicing approaches when applied to the majority of standard test images.


Context-Based Hierarchical Unequal Merging for SAR Image Segmentation 
This paper presents an image segmentation method named Context-based Hierarchical Unequal Merging for Synthetic aperture radar (SAR) Image Segmentation (CHUMSIS), which uses superpixels as the operation units instead of pixels. 
Based on the Gestalt laws, three rules that realize a new and natural way to manage different kinds of features extracted from SAR images are proposed to represent superpixel context. The rules are prior knowledge from cognitive science and serve as top-down constraints to globally guide the superpixel merging. 
The features, including brightness, texture, edges, and spatial information, locally describe the superpixels of SAR images and are bottom-up forces. While merging superpixels, a hierarchical unequal merging algorithm is designed, which includes two stages: 1) coarse merging stage and 2) fine merging stage. 
The merging algorithm unequally allocates computation resources so as to spend less running time in the superpixels without ambiguity and more running time in the superpixels with ambiguity. Experiments on synthetic and real SAR images indicate that this algorithm can make a balance between computation speed and segmentation accuracy. Compared with two state-of-the-art Markov random field models, CHUMSIS can obtain good segmentation results and successfully reduce running time.


Discrete wavelet transform and data expansion reduction in homomorphic encrypted domain.
Abstract
Signal processing in the encrypted domain is a new technology with the goal of protecting valuable signals from insecure signal processing. In this paper, we propose a method for implementing discrete wavelet transform (DWT) and multiresolution analysis (MRA) in homomorphic encrypted domain. 
We first suggest a framework for performing DWT and inverse DWT (IDWT) in the encrypted domain, then conduct an analysis of data expansion and quantization errors under the framework. To solve the problem of data expansion, which may be very important in practical applications, we present a method for reducing data expansion in the case that both DWT and IDWT are performed. With the proposed method, multilevel DWT/IDWT can be performed with less data expansion in homomorphic encrypted domain. 
We propose a new signal processing procedure, where the multiplicative inverse method is employed as the last step to limit the data expansion. Taking a 2-D Haar wavelet transform as an example, we conduct a few experiments to demonstrate the advantages of our method in secure image processing. 
We also provide computational complexity analyses and comparisons. To the best of our knowledge, there has been no report on the implementation of DWT and MRA in the encrypted domain.
  

Estimating Information from Image Colors: An Application to Digital Cameras and Natural Scenes 
The colors present in an image of a scene provide information about its constituent elements. But the amount of information depends on the imaging conditions and on how information is calculated. This work had two aims. The first was to derive explicitly estimators of the information available and the information retrieved from the color values at each point in images of a scene under different illuminations. 
The second was to apply these estimators to simulations of images obtained with five sets of sensors used in digital cameras and with the cone photoreceptors of the human eye. Estimates were obtained for 50 hyperspectral images of natural scenes under daylight illuminants with correlated color temperatures 4,000, 6,500, and 25,000 K. Depending on the sensor set, the mean estimated information available across images with the largest illumination difference varied from 15.5 to 18.0 bits and the mean estimated information retrieved after optimal linear processing varied from 13.2 to 15.5 bits (each about 85 percent of the corresponding information available). 
With the best sensor set, 390 percent more points could be identified per scene than with the worst. Capturing scene information from image colors depends crucially on the choice of camera sensors.


General Constructions for Threshold Multiple-Secret Visual Cryptographic Schemes 
A conventional threshold (k out of n) visual secret sharing scheme encodes one secret image P into n transparencies (called shares) such that any group of k transparencies reveals P when they are superimposed, while that of less than k ones cannot. 
We define and develop general constructions for threshold multiple-secret visual cryptographic schemes (MVCSs) that are capable of encoding s secret images P1,P2,...,Ps into n shares such that any group of less than k shares obtains none of the secrets, while 1) each group of k, k+1,..., n shares reveals P1, P2, ..., Ps, respectively, when superimposed, referred to as (k, n, s)-MVCS where s=n-k+1; or 2) each group of u shares reveals P(ru) where ru ∈ {0,1,2,...,s} (ru=0 indicates no secret can be seen), k ≤ u ≤ n and 2 ≤ s ≤ n-k+1, referred to as (k, n, s, R)-MVCS in which R=(rk, rk+1, ..., rn) is called the revealing list. 
We adopt the skills of linear programming to model (k, n, s) - and (k, n, s, R) -MVCSs as integer linear programs which minimize the pixel expansions under all necessary constraints. The pixel expansions of different problem scales are explored, which have never been reported in the literature. Our constructions are novel and flexible. They can be easily customized to cope with various kinds of MVCSs.
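For background, here is a minimal sketch of the simplest conventional scheme the abstract builds on: a (2, 2) visual secret sharing of a binary image with a pixel expansion of two, where stacking (OR-ing) the two shares reveals the secret. This illustrates the superposition principle but is not one of the MVCS constructions described above.

import numpy as np

def vcs_2_of_2(secret, rng=np.random.default_rng(0)):
    """secret: 2-D array of 0 (white) / 1 (black) pixels. Returns two expanded shares."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation([0, 1])            # one black, one white subpixel
            s1[i, 2 * j:2 * j + 2] = pattern
            # White secret pixel: identical patterns (stack shows 1 black subpixel of 2).
            # Black secret pixel: complementary patterns (stack shows 2 black of 2).
            s2[i, 2 * j:2 * j + 2] = pattern if secret[i, j] == 0 else 1 - pattern
    return s1, s2

secret = np.array([[0, 1], [1, 0]])
share1, share2 = vcs_2_of_2(secret)
stacked = share1 | share2       # superimposing transparencies = pixel-wise OR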


General framework to histogram-shifting-based reversible data hiding.
Abstract
Histogram shifting (HS) is a useful technique for reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. With the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions. 
Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. 
It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
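A minimal sketch of basic histogram-shifting embedding on an 8-bit grayscale image: pixels between the peak bin and a zero (or rarest) bin are shifted by one, freeing the bin next to the peak to carry message bits. Overflow bookkeeping and the extraction step are omitted, and the peak/zero selection is the simplest possible choice rather than one of the paper's shifting and embedding functions.

import numpy as np

def hs_embed(img, bits):
    """img: 2-D uint8 array; bits: sequence of 0/1. Returns (marked image, peak, zero)."""
    img = img.astype(np.int32).copy()
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # most frequent gray level (assumed < 255)
    zero = int(hist[peak + 1:].argmin()) + peak + 1  # emptiest bin above the peak
    shift = (img > peak) & (img < zero)
    img[shift] += 1                                  # shift [peak+1, zero-1] up, emptying peak+1
    flat = img.ravel()
    carriers = np.flatnonzero(flat == peak)          # pixels able to carry one bit each
    payload = np.asarray(bits[:len(carriers)], dtype=np.int32)
    flat[carriers[:len(payload)]] += payload         # value peak = bit 0, peak+1 = bit 1
    return flat.reshape(img.shape), peak, zero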
  

Hyperspectral Imagery Restoration Using Nonlocal Spectral-Spatial Structured Sparse Representation With Noise Estimation 
Noise reduction is an active research area in image processing due to its importance in improving the quality of images for object detection and classification. In this paper, we develop a sparse representation based noise reduction method for hyperspectral imagery, which relies on the assumption that the non-noise component in an observed signal can be sparsely decomposed over a redundant dictionary while the noise component does not have this property. 
The main contribution of the paper is the introduction of nonlocal similarity and the spectral-spatial structure of hyperspectral imagery into sparse representation. Non-locality means the self-similarity of the image, by which a whole image can be partitioned into groups containing similar patches. The similar patches in each group are sparsely represented with a shared subset of atoms in a dictionary, making the true signal and the noise more easily separated. 
Sparse representation with spectral-spatial structure can exploit spectral and spatial joint correlations of hyperspectral imagery by using 3-D blocks instead of 2-D patches for sparse coding, which also makes true signal and noise more distinguished. Moreover, hyperspectral imagery has both signal-independent and signal-dependent noises, so a mixed Poisson and Gaussian noise model is used. 
To make the sparse representation insensitive to the varying noise distributions in different blocks, a variance-stabilizing transformation (VST) is used to make their variances comparable. The advantages of the proposed method are validated on both synthetic and real hyperspectral remote sensing data sets.
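One standard choice of VST for mixed Poisson-Gaussian noise is the generalized Anscombe transform. Whether this paper uses exactly this form is an assumption on our part, but the sketch below shows the variance-stabilizing step in principle.

import numpy as np

def generalized_anscombe(x, sigma, gain=1.0):
    """Approximately stabilize the variance of gain*Poisson + Gaussian(0, sigma) noise to ~1."""
    arg = gain * x + (3.0 / 8.0) * gain**2 + sigma**2
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# after the VST, blocks with different noise levels have comparable variance,
# so a single sparse-coding error threshold can be shared across blocks
noisy_block = np.random.poisson(lam=20.0, size=(8, 8, 4)) + np.random.normal(0, 2.0, (8, 8, 4))
stabilized = generalized_anscombe(noisy_block, sigma=2.0)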


Image Size Invariant Visual Cryptography for General Access Structures Subject to Display Quality Constraints
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. 
This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. 
We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous approaches. 


Interactive Segmentation for Change Detection in Multispectral Remote-Sensing Images 
In this letter, we propose to solve the change detection (CD) problem in multitemporal remote-sensing images using interactive segmentation methods. The user needs to input markers related to change and no-change classes in the difference image. 
Then, the pixels under these markers are used by a support vector machine classifier to generate a spectral-change map. To further enhance the result, we include spatial contextual information in the decision process using two different solutions, based on Markov random fields and level-set methods, respectively. 
While the former is a region-driven method, the latter exploits both region and contour for performing the segmentation task. Experiments conducted on a set of four real remote-sensing images acquired by low as well as very high spatial resolution sensors and referring to different kinds of changes confirm the attractive capabilities of the proposed methods in generating accurate CD maps with simple and minimal interaction.
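A minimal sketch of the marker-driven spectral-change-map step follows, assuming scikit-learn's SVC as the support vector machine and random arrays as stand-ins for the difference image; the MRF and level-set refinements described above are omitted.

import numpy as np
from sklearn.svm import SVC

# difference image of a multitemporal pair: H x W x B (e.g., absolute band-wise difference)
diff = np.abs(np.random.rand(100, 100, 4) - np.random.rand(100, 100, 4))

# user-provided markers: 1 = change, 0 = no change, -1 = unlabeled
markers = -np.ones((100, 100), dtype=int)
markers[10:15, 10:15] = 1
markers[60:80, 60:80] = 0

X_train = diff[markers >= 0]             # pixel vectors under the markers
y_train = markers[markers >= 0]
clf = SVC(kernel='rbf', gamma='scale').fit(X_train, y_train)

# spectral-change map for the whole scene (before spatial-context refinement)
change_map = clf.predict(diff.reshape(-1, 4)).reshape(100, 100)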


Intra-and-Inter-Constraint-Based Video Enhancement Based on Piecewise Tone Mapping 
Video enhancement plays an important role in various video applications. In this paper, we propose a new intra-and-inter-constraint-based video enhancement approach aiming to: 1) achieve high intraframe quality of the entire picture where multiple regions-of-interest (ROIs) can be adaptively and simultaneously enhanced, and 2) guarantee the interframe quality consistencies among video frames. 
We first analyze features from different ROIs and create a piecewise tone mapping curve for the entire frame such that the intraframe quality can be enhanced. We further introduce new interframe constraints to improve the temporal quality consistency. 
Experimental results show that the proposed algorithm clearly outperforms state-of-the-art algorithms.
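A minimal illustration of applying a piecewise-linear tone-mapping curve to the luma channel is given below. The knot values are placeholders; in the approach described above, the curve segments would be derived from per-ROI feature analysis and further constrained across frames.

import numpy as np

def apply_piecewise_tone_curve(luma, knots_in, knots_out):
    """Map luma values through a piecewise-linear curve defined by (input, output) knots."""
    return np.interp(luma, knots_in, knots_out)

luma = np.random.randint(0, 256, (120, 160)).astype(np.float32)

# illustrative knots: stretch contrast in a dark ROI range [30, 90] while keeping endpoints fixed
knots_in  = [0.0, 30.0, 90.0, 255.0]
knots_out = [0.0, 20.0, 120.0, 255.0]
enhanced_luma = apply_piecewise_tone_curve(luma, knots_in, knots_out)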


Latent Fingerprint Matching Using Descriptor-Based Hough Transform 
Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, cover a small area, and contain large distortion. 
Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. The small number of minutiae and the noise characteristic of latents make it extremely difficult to automatically match latents to their mated full prints that are stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem. 
Further, they often rely on features that are not easy to extract from poor quality latents. In this paper, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information. 
To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications. 
Experimental results on two different latent databases (NIST SD27 and WVU latent databases) show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and commercial fingerprint matchers leads to improved matching accuracy.
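The alignment-by-voting idea can be sketched as follows: every latent/reference minutia pair votes for a rotation and translation, and the accumulator peak gives the alignment. This is a simplified stand-in; the actual descriptor-based Hough transform weights votes by minutia-descriptor similarity, which is omitted here, and all parameter values are illustrative.

import numpy as np
from collections import Counter

def hough_align(latent, reference, dtheta_bin=10.0, dxy_bin=15.0):
    """Estimate (rotation, translation) aligning latent to reference minutiae by Hough voting.
    Each minutia is (x, y, theta_degrees). Descriptor-similarity weighting is omitted."""
    votes = Counter()
    for xl, yl, tl in latent:
        for xr, yr, tr in reference:
            dtheta = (tr - tl) % 360.0
            c, s = np.cos(np.radians(dtheta)), np.sin(np.radians(dtheta))
            dx = xr - (c * xl - s * yl)          # translation after rotating the latent minutia
            dy = yr - (s * xl + c * yl)
            key = (round(dtheta / dtheta_bin), round(dx / dxy_bin), round(dy / dxy_bin))
            votes[key] += 1
    (kt, kx, ky), _ = votes.most_common(1)[0]    # peak of the quantized accumulator
    return kt * dtheta_bin, kx * dxy_bin, ky * dxy_bin

latent_minutiae    = [(12.0, 40.0, 30.0), (55.0, 80.0, 120.0), (70.0, 20.0, 200.0)]
reference_minutiae = [(112.0, 140.0, 30.0), (155.0, 180.0, 120.0), (170.0, 120.0, 200.0)]
print(hough_align(latent_minutiae, reference_minutiae))   # ~ (0, 100, 100)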


LDFT-Based Watermarking Resilient to Local Desynchronization Attacks 
Designing a watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that the robust features used for watermark synchronization are only globally invariant, not locally invariant. 
In this paper, we present a blind image watermarking resynchronization scheme that is robust against local transform attacks. First, we propose a new feature transform named the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. 
Lastly, the watermarking sequence is embedded bit by bit into each leaf node of the BSP tree by using the logarithmic quantization index modulation watermarking embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
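The embedding step can be illustrated with plain scalar quantization index modulation (QIM); the scheme above uses a logarithmic variant applied to LDFT-domain values, so treat this sketch as showing only the embed/extract mechanics.

import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient: quantize to the lattice selected by the bit."""
    bits = np.asarray(bits, dtype=float)
    offset = bits * (delta / 2.0)
    return delta * np.round((coeffs - offset) / delta) + offset

def qim_extract(coeffs, delta=8.0):
    """Recover bits by checking which lattice (plain or half-shifted) each coefficient is closer to."""
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    d1 = np.abs(coeffs - (delta * np.round((coeffs - delta / 2.0) / delta) + delta / 2.0))
    return (d1 < d0).astype(int)

c = np.random.randn(8) * 50.0
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
watermarked = qim_embed(c, bits, delta=8.0)
assert np.array_equal(qim_extract(watermarked, delta=8.0), bits)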


Linear Distance Coding for Image Classification 
The feature coding-pooling framework has been shown to perform well in image classification tasks because it can generate discriminative and robust image representations. However, the unavoidable information loss incurred by feature quantization in the coding process, and the undesired dependence of pooling on the image spatial layout, may severely limit classification performance. 
In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed. 
These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy. 
We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.
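A minimal sketch of the image-to-class distance-vector transformation at the core of LDC, using scikit-learn nearest-neighbor search; the feature pools and shapes are placeholders, and the subsequent sparse coding of the distance vectors is omitted.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def distance_vectors(local_features, class_pools):
    """Transform each local feature into a vector of its image-to-class distances.
    local_features: (N, d) descriptors of one image; class_pools: list of (M_c, d) arrays."""
    searchers = [NearestNeighbors(n_neighbors=1).fit(pool) for pool in class_pools]
    cols = []
    for nn in searchers:
        dists, _ = nn.kneighbors(local_features)   # distance from each feature to its nearest
        cols.append(dists[:, 0])                   # neighbor within that class's feature pool
    return np.stack(cols, axis=1)                  # (N, num_classes) distance vectors

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 128))                       # e.g., dense local descriptors of one image
pools = [rng.normal(size=(200, 128)) for _ in range(5)]     # pooled training features per class
dvecs = distance_vectors(features, pools)                   # later encoded into sparse codes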


Local Directional Number Pattern for Face Analysis: Face and Expression Recognition 
This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods. 
We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode such information using the prominent direction indices (directional numbers) and sign, which allows us to distinguish among similar structural patterns that have different intensity transitions. 
We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations. 
Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.
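A minimal sketch of the LDN code computation using the eight Kirsch compass masks, followed by regional histogram concatenation; the mask choice, grid size, and 64-bin histogram are illustrative assumptions rather than the exact settings described above.

import numpy as np
from scipy.ndimage import convolve

# the eight Kirsch compass masks (one per principal direction)
KIRSCH = [np.array(m) for m in [
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
]]

def ldn_codes(gray):
    """Directional-number code per pixel: the strongest positive direction and the
    strongest negative direction packed into one 6-bit code."""
    responses = np.stack([convolve(gray.astype(float), k) for k in KIRSCH])
    top = responses.argmax(axis=0)        # prominent positive direction
    bottom = responses.argmin(axis=0)     # prominent negative direction
    return (top * 8 + bottom).astype(np.uint8)

def ldn_descriptor(gray, grid=(4, 4)):
    """Concatenate LDN-code histograms over a grid of face regions."""
    codes = ldn_codes(gray)
    hists = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hists.append(np.bincount(block.ravel(), minlength=64))
    return np.concatenate(hists).astype(float)

face = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
descriptor = ldn_descriptor(face)    # compared across faces with, e.g., a chi-square distance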


Noise Reduction Based on Partial-Reference, Dual-Tree Complex Wavelet Transform Shrinkage 
This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular, though not exclusively, algorithms based on the random spray sampling technique. Owing to the nature of sprays, output images of spray-based methods tend to exhibit noise with an unknown statistical distribution. 
To avoid inappropriate assumptions about the statistical characteristics of the noise, a different assumption is made: the non-enhanced image is considered to be either free of noise or affected by imperceptible levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and the enhanced image. 
Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DTWCT). Unlike the discrete wavelet transform, the DTWCT allows for distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DTWCT, then it is normalized. 
The result is a map of the directional structures present in the non-enhanced image. Said map is then used to shrink the coefficients of the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced image is computed via the inverse transforms. A thorough numerical analysis of the results has been performed in order to confirm the validity of the proposed approach.
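A simplified sketch of the partial-reference shrinkage follows, assuming the open-source dtcwt Python package: the directional-structure map is measured on the non-enhanced luma and used to attenuate the enhanced image's DT-CWT coefficients. The final directionality-dependent mixing of enhanced and non-enhanced coefficients described above is omitted, and the gain rule is an illustrative choice.

import numpy as np
import dtcwt   # open-source dual-tree complex wavelet transform package (assumed available)

def partial_reference_shrink(luma_plain, luma_enhanced, nlevels=4, strength=0.7):
    """Shrink coefficients of the enhanced luma using directional structure measured
    on the non-enhanced luma (simplified sketch of the described approach)."""
    t = dtcwt.Transform2d()
    ref = t.forward(luma_plain.astype(float), nlevels=nlevels)
    enh = t.forward(luma_enhanced.astype(float), nlevels=nlevels)
    for hp_ref, hp_enh in zip(ref.highpasses, enh.highpasses):
        # std of the reference coefficients across the six orientations, then normalized
        smap = np.abs(hp_ref).std(axis=2)
        smap = smap / (smap.max() + 1e-8)
        gain = smap[..., np.newaxis]                       # high where directional structure exists
        hp_enh *= gain + (1.0 - gain) * (1.0 - strength)   # attenuate where the reference shows no structure
    return t.inverse(enh)                                  # noise-reduced enhanced luma

plain = np.random.rand(128, 128)
enhanced = plain + 0.05 * np.random.randn(128, 128)
denoised = partial_reference_shrink(plain, enhanced)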


Query-Adaptive Image Search With Hash Codes
Scalable image search based on visual similarity has been an active topic of research in recent years. State-of-the-art solutions often use hashing methods to embed high-dimensional image features into Hamming space, where search can be performed in real-time based on Hamming distance of compact hash codes. 
Unlike traditional metrics (e.g., Euclidean) that offer continuous distances, Hamming distances are discrete integer values. As a consequence, a large number of images often share the same Hamming distance to a query, which severely limits search quality in applications where fine-grained ranking is important. 
This paper introduces an approach that enables query-adaptive ranking of returned images that have equal Hamming distances to the query. This is achieved by first learning, offline, bitwise weights of the hash codes for a diverse set of predefined semantic concept classes. 
We formulate the weight learning process as a quadratic programming problem that minimizes intra-class distance while preserving inter-class relationship captured by original raw image features. Query-adaptive weights are then computed online by evaluating the proximity between a query and the semantic concept classes. 
With the query-adaptive bitwise weights, returned images can be easily ordered by weighted Hamming distance at a finer-grained hash code level rather than the original Hamming distance level. Experiments on a Flickr image dataset show clear improvements from our proposed approach.
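A minimal sketch of the final re-ranking step, a weighted Hamming distance with query-adaptive bitwise weights, is given below. The weights here are random placeholders standing in for the offline-learned per-class weights combined by query-to-class proximity.

import numpy as np

def weighted_hamming_rank(db_codes, query_code, bit_weights):
    """Rank database hash codes by weighted Hamming distance to the query.
    db_codes: (N, B) binary matrix; query_code: (B,) binary vector; bit_weights: (B,) floats."""
    mismatches = db_codes != query_code                 # bitwise XOR
    distances = (mismatches * bit_weights).sum(axis=1)  # weighted Hamming distance
    return np.argsort(distances), distances

rng = np.random.default_rng(1)
db_codes = rng.integers(0, 2, size=(1000, 48))          # 48-bit hash codes
query = rng.integers(0, 2, size=48)

# query-adaptive weights: a proximity-weighted combination of per-class weights
# (both the class weights and the proximities are placeholders here)
class_weights = rng.random((10, 48))
proximity = rng.random(10); proximity /= proximity.sum()
weights = proximity @ class_weights

order, dists = weighted_hamming_rank(db_codes, query, weights)
# ties under plain Hamming distance are now broken at a finer-grained, bit-weighted level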


Regional Spatially Adaptive Total Variation Super-Resolution with Spatial Information Filtering and Clustering
Total variation is a popular and effective image prior in regularization-based image processing. However, because the total variation model favors a piecewise constant solution, the result in flat regions of the image is often poor under high noise intensity, and pseudoedges are produced. 
In this paper, we develop a regional spatially adaptive total variation model. Initially, the spatial information is extracted based on each pixel, and then two filtering processes are added to suppress the effect of pseudoedges. In addition, the spatial information weight is constructed and classified with k-means clustering, and the regularization strength in each region is controlled by the clustering center value. 
The experimental results, on both simulated and real datasets, show that the proposed approach can effectively reduce the pseudoedges of the total variation regularization in the flat regions, and maintain the partial smoothness of the high-resolution image. 
More importantly, compared with the traditional pixel-based spatial information adaptive approach, the proposed region-based spatial information adaptive total variation model can better avoid the effect of noise on the spatial information extraction, and maintains robustness with changes in the noise intensity in the super-resolution process.
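A minimal sketch of constructing the region-based regularization-strength map follows: a local-variance measure stands in for the spatial information, a median filter stands in for the suppression filtering, and k-means assigns one strength per region. The TV super-resolution solver itself, and the exact mapping from cluster centers to strengths, are omitted or assumed.

import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from sklearn.cluster import KMeans

def regularization_map(img, n_regions=3, win=7, lam_base=0.05):
    """Per-pixel TV regularization strength controlled by k-means clusters of local variance."""
    local_mean = uniform_filter(img, size=win)
    local_var = uniform_filter(img**2, size=win) - local_mean**2   # spatial information measure
    local_var = median_filter(local_var, size=3)                   # filtering to suppress pseudoedge cues
    km = KMeans(n_clusters=n_regions, n_init=10).fit(local_var.reshape(-1, 1))
    labels = km.labels_.reshape(img.shape)
    centers = km.cluster_centers_.ravel()
    # flat regions (low variance) get stronger smoothing, textured regions weaker
    strengths = lam_base * centers.max() / (centers + 1e-8)
    return strengths[labels]

img = np.random.rand(96, 96)
lam_map = regularization_map(img)   # would weight the TV term pixel-wise in the SR objective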


Reversible Data Hiding With Optimal Value Transfer 
In reversible data hiding techniques, the values of the host data are modified according to particular rules, and the original host content can be perfectly restored after extraction of the hidden data on the receiver side. In this paper, the optimal rule of value modification under a payload-distortion criterion is found by using an iterative procedure, and a practical reversible data hiding scheme is proposed. 
The secret data, as well as the auxiliary information used for content recovery, are carried by the differences between the original pixel-values and the corresponding values estimated from the neighbors. Here, the estimation errors are modified according to the optimal value transfer rule. 
Also, the host image is divided into a number of pixel subsets and the auxiliary information of a subset is always embedded into the estimation errors in the next subset. A receiver can successfully extract the embedded secret data and recover the original content in the subsets with an inverse order. This way, a good reversible data hiding performance is achieved.


Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting 
In this paper, we propose a new reversible watermarking scheme. A first contribution is a histogram shifting modulation that adaptively takes into account the local specificities of the image content. By applying it to the image prediction-errors and by considering their immediate neighborhood, the proposed scheme inserts data in textured areas where other methods fail to do so. 
Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the most suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark insertion. 
In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction. The experiments conducted so far, on natural images and on medical images from different modalities, show that for capacities smaller than 0.4 bpp our method can insert more data with lower distortion than existing schemes. For the same capacity, we achieve a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than with the scheme of Hwang, currently the most efficient approach.


Rich Intrinsic Image Decomposition of Outdoor Scenes from Multiple Views
Intrinsic images aim at separating an image into reflectance and illumination layers to facilitate analysis or manipulation. 
Most successful methods rely on user indications [Bousseau et al. 2009] or precise geometry, or require multiple images taken from the same viewpoint under varying lighting, to solve this severely ill-posed problem. 
We propose a method to estimate intrinsic images from multiple views of an outdoor scene at a single time of day without the need for precise geometry and with only a simple manual calibration step.


Robust Face Recognition for Uncontrolled Pose and Illumination Changes
Face recognition has made significant advances in the last decade, but robust commercial applications are still lacking. Current authentication/identification applications are limited to controlled settings, e.g., with limited pose and illumination changes, and with the user usually aware of being screened and collaborating in the process. 
To address the challenges posed by looser restrictions, this paper proposes a novel framework for real-world face recognition in uncontrolled settings, named Face Analysis for Commercial Entities (FACE). Its robustness comes from normalization (“correction”) strategies that address pose and illumination variations. 
In addition, two separate image quality indices quantitatively assess pose and illumination changes for each biometric query before it is submitted to the classifier. Samples of poor quality may be discarded, undergo manual classification, or, when possible, trigger a new capture. After such filtering, template similarity for matching purposes is measured using a localized version of the image correlation index. 
Finally, FACE adopts reliability indices, which estimate the “acceptability” of the final identification decision made by the classifier. Experimental results show that the accuracy of FACE (in terms of recognition rate) compares favorably, and in some cases by significant margins, against popular face recognition methods. In particular, FACE is compared against SVM, incremental SVM, principal component analysis, incremental LDA, ICA, and hierarchical multiscale local binary pattern. 
Testing exploits data from different data sets: CelebrityDB, Labeled Faces in the Wild, SCface, and FERET. The face images used present variations in pose, expression, illumination, image quality, and resolution. 
Our experiments show the benefits of using image quality and reliability indices to enhance overall accuracy, on the one hand, and to provide individualized processing of biometric probes for better decision making, on the other. 
Both kinds of indices, owing to the way they are defined, can be easily integrated within different frameworks and off-the-shelf biometric applications for the following: 1) data fusion; 2) online identity management; and 3) interoperability. The results obtained by FACE show a significant increase in accuracy when compared with the results produced by the other algorithms considered.


Robust Hashing for Image Authentication Using Zernike Moments and Local Features 
A robust hashing method is developed for detecting image forgery including removal, insertion, and replacement of objects, and abnormal color modification, and for locating the forged area. Both global and local features are used in forming the hash sequence. The global features are based on Zernike moments representing luminance and chrominance characteristics of the image as a whole. 
The local features include position and texture information of salient regions in the image. Secret keys are introduced in feature extraction and hash construction. While being robust against content-preserving image processing, the hash is sensitive to malicious tampering and, therefore, applicable to image authentication. 
The hash of a test image is compared with that of a reference image. When the hash distance is greater than a threshold τ1 and less than τ2, the received image is judged as a fake. By decomposing the hashes, the type of image forgery and the location of forged areas can be determined. The probability of collision between hashes of different images approaches zero. Experimental results are presented to show the effectiveness of the method.
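A minimal sketch of the global (Zernike-moment) part of the hash and the two-threshold decision is given below, assuming the third-party mahotas package for Zernike moments; the chrominance moments, salient-region local features, secret keys, and threshold values are omitted or placeholders.

import numpy as np
import mahotas  # third-party package providing Zernike moment features (assumed available)

def global_hash(gray, degree=8):
    """Global part of the hash: Zernike moment magnitudes of the luminance image."""
    radius = min(gray.shape) // 2
    return mahotas.features.zernike_moments(gray, radius, degree=degree)

def authenticate(test_gray, ref_hash, tau1=0.05, tau2=0.5, degree=8):
    """Two-threshold decision on the hash distance (illustrative thresholds)."""
    d = np.linalg.norm(global_hash(test_gray, degree) - ref_hash)
    if d <= tau1:
        return "authentic (content-preserving processing at most)"
    elif d < tau2:
        return "fake (tampered version of the reference)"
    return "different image"

ref = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
ref_hash = global_hash(ref)
print(authenticate(ref.copy(), ref_hash))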


Scene Text Detection via Connected Component Clustering and Nontext Filtering 
In this paper, we present a new scene text detection algorithm based on two machine learning classifiers: one allows us to generate candidate word regions and the other filters out nontext ones. To be precise, we extract connected components (CCs) in images by using the maximally stable extremal region algorithm. 
These extracted CCs are partitioned into clusters so that we can generate candidate regions. Unlike conventional methods that rely on heuristic rules for clustering, we train an AdaBoost classifier that determines the adjacency relationship, and we cluster CCs by using their pairwise relations. 
Then we normalize candidate word regions and determine whether each region contains text or not. Since the scale, skew, and color of each candidate can be estimated from CCs, we develop a text/nontext classifier for normalized images. This classifier is based on multilayer perceptrons and we can control recall and precision rates with a single free parameter. 
Finally, we extend our approach to exploit multichannel information. Experimental results on ICDAR 2005 and 2011 robust reading competition datasets show that our method yields the state-of-the-art performance both in speed and accuracy.
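A minimal sketch of the CC-extraction front end is given below, assuming OpenCV's MSER implementation; the AdaBoost pairwise clustering into word candidates and the MLP text/nontext filtering are only indicated in comments.

import cv2
import numpy as np

def candidate_components(image_bgr, min_area=30, max_area=5000):
    """Extract maximally stable extremal regions as candidate text components."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    return [(r, b) for r, b in zip(regions, bboxes) if min_area <= b[2] * b[3] <= max_area]

img = cv2.imread("scene.jpg")          # any natural-scene photo
if img is not None:
    for _, (x, y, w, h) in candidate_components(img):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imwrite("candidates.jpg", img)
# next steps in the paper: AdaBoost-based pairwise clustering of CCs into word candidates,
# then an MLP text/nontext classifier on the normalized candidate regions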


Secure Watermarking for Multimedia Content Protection: A Review of its Benefits and Open Issues 
The paper illustrates recent results regarding secure watermarking to the signal processing community, highlighting both benefits and still-open issues. Secure signal processing, by which we indicate a set of techniques able to process sensitive signals that have been obfuscated either by encryption or by other privacy-preserving primitives, may offer valuable solutions to these issues. 
More specifically, the adoption of efficient methods for watermark embedding or detection on data that have been secured in some way, which we name in short secure watermarking, provides an elegant way to solve the security concerns of fingerprinting applications.





FOR MORE ABSTRACTS, IEEE BASE PAPER / REFERENCE PAPERS AND NON IEEE PROJECT ABSTRACTS

CONTACT US
No.109, 2nd Floor, Bombay Flats, Nungambakkam High Road, Nungambakkam, Chennai - 600 034
Near Ganpat Hotel, Above IOB, Next to ICICI Bank, Opp to Cakes'n'Bakes
044-2823 5816, 98411 93224, 89393 63501
ncctchennai@gmail.com, ncctprojects@gmail.com 


EMBEDDED SYSTEM PROJECTS IN
Embedded Systems using Microcontrollers, VLSI, DSP, Matlab, Power Electronics, Power Systems, Electrical
For Embedded Projects - 044-45000083, 7418497098 
ncctchennai@gmail.com, www.ncct.in


Project Support Services
Complete Guidance | 100% Result for all Projects | On time Completion | Excellent Support | Project Completion Experience Certificate | Free Placements Services | Multi Platform Training | Real Time Experience


TO GET ABSTRACTS / PDF Base Paper / Review PPT / Other Details
Mail your requirements / SMS your requirements / Call and get the same / Directly visit our Office


WANT TO RECEIVE FREE PROJECT DVD...
Want to Receive FREE Projects Titles, List / Abstracts  / IEEE Base Papers DVD… Walk in to our Office and Collect the same Or

Send a scan copy of your College ID, your Mobile No & Complete Postal Address, mentioning that you are interested in receiving the DVD through Courier, Free of Cost


Own Projects
Own Projects! or New IEEE Paper… Any Projects…
Mail your Requirements to us and Get it Done with us… or Call us / Email us / SMS us or Visit us Directly

We will do any Projects…




Matlab Project Titles, Matlab Project Abstracts, Matlab IEEE Project Abstracts, Matlab Projects abstracts for CSE IT MCA, Download Matlab Titles, Download Matlab Project Abstracts, Download IEEE Matlab Abstracts