International Science Index
International Journal of Computer, Electrical, Automation, Control and Information Engineering
Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Many organizations face the challenge of analyzing and building machine learning models on their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R at scale without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology for parallelization and remote computing, or R Services on premises or in the cloud, users can apply R to the data where it resides.
Data Quality Enhancement with String Length Distribution
Recently, the volume of collectable manufacturing data has been increasing rapidly. At the same time, large-scale recalls have become a serious social problem. Under such circumstances, there is a growing need to prevent large-scale recalls through defect analysis, such as root cause analysis and anomaly detection, that utilizes manufacturing data. However, the time needed to classify the strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. This method learns character features, especially the string length distribution, from Product IDs and Machine IDs in the bill of materials (BOM) and the asset list. By applying the proposed method to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, the requirement of quick defect analysis can be expected to be fulfilled.
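As a rough illustration of the idea of classifying strings by their length distributions (a sketch of the concept, not the authors' SLDC implementation), the following Python snippet assigns a batch of strings to whichever reference column, here hypothetical Product ID and Machine ID samples, has the closest normalized length histogram:

```python
from collections import Counter

def length_distribution(strings):
    """Normalized histogram of string lengths."""
    counts = Counter(len(s) for s in strings)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def l1_distance(p, q):
    """L1 distance between two sparse histograms."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def classify(batch, references):
    """Assign a batch of strings to the reference class with the closest length distribution."""
    d = length_distribution(batch)
    return min(references, key=lambda name: l1_distance(d, references[name]))

# Hypothetical reference columns learned from a BOM / asset list
references = {
    "product_id": length_distribution(["P-10023", "P-20488", "P-99311"]),
    "machine_id": length_distribution(["M01", "M02", "M17"]),
}
print(classify(["P-55555", "P-12121"], references))  # -> product_id
```

Because only length counts are compared, a batch of strings is classified in a single pass, which is the kind of speedup the abstract attributes to SLDC.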
Hierarchical Checkpoint Protocol in Data Grids
Grids of computing nodes have emerged as a representative means of connecting distributed computers and resources scattered all over the world for the purposes of computing and distributed storage. Since fault tolerance becomes complex due to the dynamic availability of resources in a decentralized grid environment, checkpointing can be used in connection with replication in data grids. The objective of our work is to present a fault tolerance protocol for data grids with a data replication-driven model based on clustering. The performance of the protocol is evaluated with the OMNeT++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.
Complex Fuzzy Evolution Equation with Nonlocal Conditions
The objective of this paper is to study the existence and uniqueness of mild solutions for a complex fuzzy evolution equation with nonlocal conditions that accommodates the notion of fuzzy sets defined by complex-valued membership functions. We first propose a definition of complex fuzzy strongly continuous semigroups. We then give an existence and uniqueness result for the complex fuzzy evolution equation with nonlocal conditions.
Management Software for the Elaboration of an Electronic File in the Pharmaceutical Industry Following Mexican Regulations
The certification of certain goods of public interest, such as medicines and food, requires the preparation and delivery of a dossier. Its elaboration demands legal and administrative knowledge, the organization of the process documents, and an ordering that allows the file to be verified. Therefore, a virtual platform was developed to support the management and elaboration of the dossier, providing access to the information and interfaces that allow the user to know the status of projects. Developing the dossier system in the cloud allows the inclusion of the technical requirements for software management, including validation, that apply to manufacturing in this industry. The platform guides and facilitates the elaboration of the dossier (report, file, or history) in accordance with Mexican legislation and regulations, and it also offers auxiliary tools for its management. This technological alternative supports the organization of documents and provides access to the information required for the successful development of a dossier. The platform is divided into the following modules: system control, catalog, dossier, and enterprise management. The modules are designed according to the structure required in a dossier in those areas. However, the structure allows for flexibility, as the goal is a tool that facilitates rather than obstructs processes. The architecture and development of the software allow for future expansion to other fields, which would imply feeding the system with new regulations.
Efficient Filtering of Graph Based Data Using Graph Partitioning
An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem with the direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method for partitioning graphs and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the proposed approach.
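To make the inverse-filtering problem concrete, here is a minimal sketch of the baseline formulation the abstract starts from: a polynomial filter H in the adjacency shift A, and the O(N³) direct solve that recovers the input signal. The toy path graph and filter coefficients are assumptions for illustration; the paper's partitioning-based speedup is not reproduced here.

```python
import numpy as np

def filter_matrix(A, h):
    """H = h0*I + h1*A + h2*A^2 + ... (polynomial graph filter in the shift A)."""
    N = A.shape[0]
    H = np.zeros((N, N))
    P = np.eye(N)
    for c in h:
        H += c * P
        P = P @ A
    return H

# Toy 4-node path graph, adjacency matrix as the shift operator
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
h = [1.0, 0.5]                         # assumed filter coefficients
x = np.array([1.0, 2.0, 0.0, -1.0])    # true input graph signal
y = filter_matrix(A, h) @ x            # observed filtered output

# Direct approach: one dense O(N^3) solve recovers the input
x_hat = np.linalg.solve(filter_matrix(A, h), y)
print(np.allclose(x_hat, x))  # True
```

Partitioning the graph replaces this single dense solve with smaller per-partition solves, which is where the claimed cost reduction comes from.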
Visual Search Based Indoor Localization in Low Light via RGB-D Camera
Most traditional visual indoor navigation algorithms consider only localization in ordinary daytime conditions, whereas in this paper we focus on indoor re-localization in low light. Since RGB images are degraded in low light, the less discriminative infrared and depth image pairs captured by an RGB-D camera are taken as the input, and the most similar candidates are retrieved as the output from a database built in the bag-of-words framework. Epipolar constraints are then used to re-localize the query infrared and depth image sequence. We evaluate our method on two datasets captured by a Kinect v2. The results demonstrate very promising re-localization performance for indoor navigation systems in low light environments.
A Robust Hybrid Blind Digital Image Watermarking System Using Discrete Wavelet Transform and Contourlet Transform
In this paper, a hybrid blind digital watermarking system using the Discrete Wavelet Transform (DWT) and the Contourlet Transform (CT) has been implemented and tested against five common types of image attacks. The performance evaluation shows improved results in terms of imperceptibility and robustness, with high tolerance against these attacks; accordingly, the system is very effective and applicable.
Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System
In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, which in this case is a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of the sensors of the mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the Bag-of-Words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicated real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.
Signal Processing Approach to Study Multifractality and Singularity of Solar Wind Speed Time Series
This paper investigates the nature of the fluctuations of the daily average solar wind speed time series collected over a period of 2492 days, from 1 January 1997 to 28 October 2003. The degree of self-similarity and the scaling behavior of the solar wind speed signal have been explored to characterize the signal fluctuations. The Multifractal Detrended Fluctuation Analysis (MFDFA) method has been applied to the signal under investigation to perform this task. Furthermore, the singularity spectrum of the signal has also been obtained to gauge the extent of the multifractality of the time series.
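The core detrending step underlying (MF)DFA can be sketched briefly. The snippet below is a minimal second-order (q = 2) fluctuation function evaluated on synthetic white noise, for which the scaling exponent is known to be about 0.5; it is an illustration of the method's mechanics, not the authors' solar wind pipeline.

```python
import numpy as np

def dfa_fluctuation(x, scale, order=1):
    """RMS fluctuation F(s) of the integrated, piecewise-detrended series."""
    profile = np.cumsum(x - np.mean(x))          # integrated profile
    n_seg = len(profile) // scale
    ms = []
    for i in range(n_seg):
        seg = profile[i * scale:(i + 1) * scale]
        t = np.arange(scale)
        trend = np.polyval(np.polyfit(t, seg, order), t)  # local polynomial trend
        ms.append(np.mean((seg - trend) ** 2))
    return np.sqrt(np.mean(ms))

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)                    # synthetic uncorrelated signal
scales = [8, 16, 32, 64]
F = [dfa_fluctuation(x, s) for s in scales]
# Scaling exponent from the log-log slope of F(s) vs. s; ~0.5 for white noise
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(slope, 2))
```

MFDFA generalizes this by raising the segment variances to q/2 before averaging, so that different moments q probe different parts of the singularity spectrum.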
Standard Languages for Creating a Database to Display Financial Statements on a Web Application
XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements on web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries recognize the role of the XBRL language in financial reporting and the benefits this reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information). Here we present the advantages and opportunities that a company may gain by using the XBRL format for business reporting. This paper also presents XBRL alongside the other languages used for creating the database, such as XML and XHTML. The role of the AJAX model and technology in the exchange of financial data between the web client and web server is explained in detail, and the basic network layers for data exchange via the web are outlined.
Analytics Model in a Telehealth Center Based on Cloud Computing and Local Storage
Some of the main goals of telecare, such as monitoring, treatment, and telediagnosis, are achieved by integrating applications with specific appliances. In order to achieve a coherent model that integrates software, hardware, and healthcare systems, different telehealth models with Internet of Things (IoT), cloud computing, artificial intelligence, etc., have been implemented, and their advantages are still under analysis. In this paper, we propose an integrated model based on an IoT architecture and a cloud computing telehealth center. An analytics module is presented as a solution for supporting the diagnosis of certain diseases. Its specific features are then compared with recently deployed conventional models in telemedicine. The main advantages of this model are control over the security and privacy of patient information and the optimization of the processing and acquisition of clinical parameters according to technical characteristics.
Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Recently, feedback control systems using random dither
quantizers have been proposed for linear discrete-time systems.
However, the constraints imposed on state and control variables
have not yet been taken into account for the design of feedback
control systems with random dither quantization. Model predictive
control is a kind of optimal feedback control in which control
performance over a finite future is optimized with a performance
index that has a moving initial and terminal time. An important
advantage of model predictive control is its ability to handle
constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
Development of a Real-Time Brain-Computer Interface for Interactive Robot Therapy: An Exploration of EEG and EMG Features during Hypnosis
This study presents a framework for the development of a new generation of therapy robots that can interact with users by monitoring their physiological and mental states. Here, we focus on one of the more controversial methods of therapy, hypnotherapy. Hypnosis has been shown to be useful in the treatment of many clinical conditions, and even for healthy people it can be an effective technique for relaxation or for enhancement of memory and concentration. Our aim is to develop a robot that collects information about the user's mental and physical states using electroencephalogram (EEG) and electromyography (EMG) signals and performs cost-effective hypnosis in the comfort of the user's home. The presented framework consists of three main steps: (1) find the EEG correlates of mind state before, during, and after hypnosis and establish a cognitive model for state changes; (2) develop a system that can track the changes in EEG and EMG activities in real time and determine whether the user is ready for suggestion; and (3) implement our system in a humanoid robot that will talk to and conduct hypnosis on users based on their mental states. This paper presents a pilot study regarding the first stage, the detection of EEG and EMG features during hypnosis.
An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient
Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The high demand for image annotation and archiving on the web has attracted researchers to develop many algorithms for this application domain. Existing techniques for IMC have two drawbacks: the description of elementary characteristics of the image and the correlation between labels are not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) that handles both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor and applies the Label Priority Power-set as the multi-label transformation to address the problem of label correlation. Experiments show that the results of MIML-HOGLPP are better in terms of several evaluation metrics compared with two existing techniques.
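The feature-descriptor half of such a pipeline can be sketched in a few lines. Below is a simplified single-cell histogram of oriented gradients in NumPy (no block normalization or cell tiling, and not the authors' exact descriptor): gradient orientations are binned over 0-180 degrees, weighted by gradient magnitude.

```python
import numpy as np

def hog_cell(img, n_bins=9):
    """Histogram of oriented gradients for a single cell (simplified HOG)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]       # central differences in x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]       # central differences in y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    bins = (ang / (180 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):  # magnitude-weighted vote
        hist[b] += m
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge produces horizontal gradients, so energy lands in the 0-degree bin
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = hog_cell(img)
print(h.argmax())  # 0
```

The Label Priority Power-set side of the method then treats each observed combination of labels as a single class, so that correlations between labels are learned implicitly by an ordinary single-label classifier.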
Privacy-Preserving Location Sharing System with Client/Server Architecture in Mobile Online Social Network
Location sharing is a fundamental service in mobile Online Social Networks (mOSNs), and it has raised significant privacy concerns in recent years. Currently, most location-based service applications adopt a client/server architecture. In this paper, a location sharing system named CSLocShare is presented to provide flexible privacy-preserving location sharing with a client/server architecture in mOSNs. CSLocShare enables location sharing between both trusted social friends and untrusted strangers without a third-party server. In CSLocShare, the Location-Storing Social Network Server (LSSNS) provides location-based services without knowing the users' real locations. A thorough analysis indicates that the users' location privacy is protected while storage and communication costs are reduced, making CSLocShare suitable and effective in practice.
An Approach for Vocal Register Recognition Based on Spectral Analysis of Singing
Recognizing and controlling vocal registers during singing is a difficult task for beginner vocalists. Among other skills, it requires identifying which part of the natural resonators is being used when a sound propagates through the body. Thus, an application has been designed that provides sound recording, automatic vocal register recognition (VRR), and a graphical user interface with real-time visualization of the signal and the recognition results. Six spectral features are determined for each time frame and passed to a support vector machine classifier, yielding a binary decision on the head or chest register assignment of the segment. The classification training and testing data were recorded by ten professional female singers (sopranos, aged 19-29) performing sounds in both chest and head registers. The classification accuracy exceeded 93% in each of several validation schemes. Apart from the hard two-class decision, the support vector classifier also returns the distance between a particular feature vector and the discrimination hyperplane in the feature space. This information reflects the level of certainty of the vocal register classification in a fuzzy way. Thus, the designed recognition and training application is able to assess and visualize the continuous trend in singing in a user-friendly graphical mode, providing an easy way to control the vocal emission.
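The fuzzy-certainty idea, i.e., using the signed distance to the SVM hyperplane rather than only its sign, can be sketched as follows. The weight vector, bias, and six-dimensional feature vector below are hypothetical placeholders, and the logistic squashing is one possible choice for mapping distance to a certainty value; the application's trained model is not reproduced here.

```python
import numpy as np

def svm_decision(x, w, b):
    """Signed distance of feature vector x to the hyperplane w.x + b = 0."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def fuzzy_certainty(d, scale=1.0):
    """Map the signed distance to a (0, 1) certainty via a logistic squashing."""
    return 1.0 / (1.0 + np.exp(-d / scale))

# Hypothetical trained linear SVM and a six-dimensional spectral feature vector
w = np.array([0.8, -0.3, 0.5, 0.1, -0.7, 0.2])
b = -0.1
x = np.array([1.2, 0.4, 0.9, 0.0, -0.5, 0.3])

d = svm_decision(x, w, b)
label = "head" if d > 0 else "chest"        # hard two-class decision
print(label, round(fuzzy_certainty(d), 2))  # soft certainty alongside the label
```

Values near 0.5 flag frames close to the decision boundary, which is exactly the continuous trend the training interface visualizes.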
An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Consequently, semantic interpretation is a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. In order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is first applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained using TLPP, a weighted matrix is constructed to rank the different spectral bands according to their contribution scores. The relevant bands are then adaptively selected based on the weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Neuron-Based Control Mechanisms for a Robotic Arm and Hand
A robotic arm and hand controlled by simulated
neurons is presented. The robot makes use of a biological neuron
simulator using a point neural model. The neurons and synapses are
organised to create a finite state automaton including neural inputs
from sensors, and outputs to effectors. The robot performs a simple
pick-and-place task. This work is a proof of concept study for a
longer-term approach. It is hoped that further work will lead both to more effective and flexible robots and to a better understanding of human and other animal neural processing, particularly for physical motion. This is a multidisciplinary approach combining cognitive neuroscience, robotics, and psychology.
Robust Control of a Dynamic Model of an F-16 Aircraft with Improved Damping through Linear Matrix Inequalities
This work presents an application of Linear Matrix Inequalities (LMI) to the robust control of an F-16 aircraft through an algorithm that guarantees a damping factor for the closed-loop system. The results show that the zero and gain settings are sufficient to ensure robust performance and stability with respect to various operating points. The technique used is pole placement, which aims to place the closed-loop poles of the system in a specific region of the complex plane. Test results using a dynamic model of the F-16 aircraft are presented and discussed.
Incremental Learning of Independent Topic Analysis
In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing collection of documents, because ITA must use all the document data, so its temporal and spatial costs are very high. Therefore, we present Incremental ITA, which extracts independent topics from a growing collection of documents by updating the topics whenever new documents are added, starting from the topics already extracted from the previous data. We show the results of applying Incremental ITA to benchmark datasets.
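One ingredient that any incremental ICA-style pipeline needs is the ability to update its statistics without revisiting old data. As a hedged sketch of that ingredient only (not the authors' Incremental ITA algorithm), the class below maintains a running mean and covariance, Welford-style, so the whitening step that precedes ICA can absorb new document vectors one batch at a time:

```python
import numpy as np

class IncrementalWhitener:
    """Running mean/covariance so ICA whitening need not revisit old documents."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))      # accumulated scatter matrix

    def update(self, batch):
        for x in batch:                      # Welford-style update per document vector
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov += np.outer(delta, x - self.mean)

    def whiten(self, X):
        C = self.cov / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(C)       # symmetric inverse square root of C
        W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
        return (X - self.mean) @ W

rng = np.random.default_rng(2)
scale = np.array([1.0, 5.0, 0.5])            # anisotropic toy "document" features
w = IncrementalWhitener(3)
w.update(rng.standard_normal((200, 3)) * scale)
w.update(rng.standard_normal((100, 3)) * scale)   # new documents arrive later
Z = w.whiten(rng.standard_normal((500, 3)) * scale)
```

After whitening, the covariance of `Z` is close to the identity, which is the starting point for extracting independent components from the stream.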
Definition of a Computing Independent Model and Rules for Transformation Focused on the Model-View-Controller Architecture
This paper presents a model-oriented approach to software development in the Model-View-Controller (MVC) architectural standard. The approach describes a process for extracting information from the models, in which rules and a syntax defined in this work assist in the design of the initial model and its future conversions. The proposed syntax is based on natural language, following the rules of classic Portuguese grammar, together with conversion rules that generate models conforming to the norms of the Object Management Group (OMG) and the Meta-Object Facility (MOF).
Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea
Fishery lights on the sea surface can be detected by the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The DNB covers the spectral range of 500 to 900 nm and achieves high sensitivity. However, the DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of each month. Fishery lights and other surface lights are separated from cloud-reflected lunar light by a method using the DNB and an infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of the DNB radiance against the background radiance. Fishing boats and structures on islands can also be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites, using the microwaves reflected by the surface targets. SAR faces a tradeoff between spatial resolution and coverage when detecting small targets such as fishing boats. The distribution of fishing boats and island activities was detected using the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. Fishing boats were detected as single pixels of strongly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. Since the scattering signals exhibit significant look-angle dependence, the standard deviations of the scattered signals for each look angle were used as thresholds to separate the signals of fishing boats and island structures from background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the time lag of about six hours between the midnight DNB observations and the morning or evening SAR observations.
The temporal changes of island activities were detected as changes in the mean DNB intensity over a circular area corresponding to a certain scale of activity. An increase in the mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities.
E-Learning Recommender System Based on Collaborative Filtering and Ontology
In recent years, e-learning recommender systems have attracted great attention as a solution to the problem of information overload in e-learning environments and as a way of providing relevant recommendations to online learners. E-learning recommenders play an increasing educational role in helping learners find appropriate learning materials to support the achievement of their learning goals. Although general recommender systems have recorded significant success in solving the problem of information overload in e-commerce domains and in providing accurate recommendations, e-learning recommender systems still face issues arising from differences in learner characteristics such as learning style, skill level, and study level. Conventional recommendation techniques such as collaborative filtering and content-based filtering deal with only two types of entities, namely users and items with their ratings, and do not take learner characteristics into account in the recommendation process. Therefore, conventional techniques cannot make accurate and personalized recommendations in e-learning environments. In this paper, we propose a recommendation technique combining collaborative filtering and ontology to recommend personalized learning materials to online learners. The ontology is used to incorporate the learner characteristics into the recommendation process alongside the ratings, while collaborative filtering predicts ratings and generates recommendations. Furthermore, ontological knowledge is used by the recommender system at the initial stages, in the absence of ratings, to alleviate the cold-start problem. Evaluation results show that the proposed technique outperforms collaborative filtering on its own in terms of personalization and recommendation accuracy.
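The collaborative-filtering half of such a hybrid can be sketched with the standard user-based prediction formula: a user's unknown rating is the user's mean plus a similarity-weighted sum of the mean-centered ratings of the most similar users. The toy ratings matrix below is a made-up example, and the ontology side of the paper's method is not represented here.

```python
import numpy as np

def predict_rating(R, u, i, k=2):
    """Predict user u's rating of item i from the k most similar users (0 = unrated)."""
    mask = R > 0
    means = np.array([R[v][mask[v]].mean() if mask[v].any() else 0.0
                      for v in range(len(R))])
    centered = np.where(mask, R - means[:, None], 0.0)   # mean-centered known ratings
    norms = np.linalg.norm(centered, axis=1)
    sims = centered @ centered[u] / np.maximum(norms * norms[u], 1e-12)
    sims[u] = -np.inf                                    # exclude the user themself
    neighbours = [v for v in np.argsort(-sims) if mask[v, i]][:k]
    num = sum(sims[v] * (R[v, i] - means[v]) for v in neighbours)
    den = sum(abs(sims[v]) for v in neighbours)
    return means[u] + num / den if den > 0 else means[u]

# Toy ratings matrix: rows = learners, columns = learning materials (0 = unrated)
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 5, 4]], float)
print(round(predict_rating(R, 0, 2), 2))  # 1.33
```

Here the only rater of item 2 has tastes opposite to learner 0, so the prediction lands well below learner 0's mean. Incorporating learner characteristics, as the proposed technique does via the ontology, would additionally restrict or reweight the neighbourhood.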
Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
A practical and simple self-indexing data structure, Partitioned Elias-Fano (PEF) Compressed Suffix Arrays (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the tested data in terms of compression ratio and count and locate times, except for evenly distributed data such as the proteins data. The experiments show that the distribution of the φ function is more important than the alphabet size for the compression ratio: unevenly distributed φ values yield a better compression effect, and the larger the number of hits, the longer the count and locate times.
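To show what the underlying encoding does, here is a sketch of plain (unpartitioned) Elias-Fano coding of a monotone integer sequence: each value is split into low bits, stored verbatim, and high bits, stored as unary gaps. The partitioned variant used by PEF splits the sequence into chunks encoded independently; that refinement, and the CSA machinery, are not reproduced here.

```python
from math import floor, log2

def ef_encode(xs):
    """Split each value into high/low parts; high parts gap-coded in unary (Elias-Fano)."""
    n, u = len(xs), xs[-1] + 1
    l = max(0, floor(log2(u / n)))           # number of low bits per element
    lows = [x & ((1 << l) - 1) for x in xs]
    highs = []
    prev = 0
    for x in xs:
        h = x >> l
        highs.extend([0] * (h - prev))       # unary gap between consecutive high parts
        highs.append(1)
        prev = h
    return highs, lows, l

def ef_decode(highs, lows, l):
    out, h, i = [], 0, 0
    for bit in highs:
        if bit == 0:
            h += 1                           # advance the high part
        else:
            out.append((h << l) | lows[i])   # reattach the stored low bits
            i += 1
    return out

xs = [3, 4, 7, 13, 14, 15, 21, 43]
hi, lo, l = ef_encode(xs)
assert ef_decode(hi, lo, l) == xs
print(len(hi) + len(lo) * l, "bits")  # 34 bits, close to n*(2 + log2(u/n))
```

When consecutive values cluster, as the φ function does for most real texts, the unary gaps shrink, which is why unevenly distributed φ values compress better.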
Supporting Embedded Medical Software Development with MDevSPICE® and Agile Practices
Emerging medical devices rely heavily on embedded software that runs on specific platforms in real time. The development of embedded software differs from ordinary software development due to the hardware-software dependency. MDevSPICE® has been developed to provide guidance to support such development, and agile practices have been introduced to increase the flexibility of this framework. This paper outlines the challenges of embedded medical device software development, describes the structure of MDevSPICE®, and suggests a suitable combination of agile practices that will help to add flexibility and address the corresponding challenges.
Bee Colony Optimization Applied to the Bin Packing Problem
We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the exploitation of lost areas in the bins and a reduction of the problem size. A heuristic method based on the first-fit strategy adapted to this problem is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
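A first-fit strategy for the oriented two-dimensional case can be sketched with a shelf heuristic: rectangles are taken tallest-first, each placed on the first shelf of the first bin with enough room, opening a new shelf or bin only when necessary. This is a generic baseline, assuming every rectangle fits in an empty bin, not the paper's pretreatment or its bee colony algorithm.

```python
def shelf_first_fit(rects, W, H):
    """Pack oriented rectangles (w, h) into W x H bins with a first-fit shelf heuristic.

    Assumes each rectangle fits in an empty bin on its own.
    """
    bins = []                                 # each bin: list of shelves [y, shelf_h, used_w]
    for w, h in sorted(rects, key=lambda r: -r[1]):   # tallest first
        placed = False
        for shelves in bins:
            for s in shelves:                 # first shelf with horizontal room and height
                if s[2] + w <= W and h <= s[1]:
                    s[2] += w
                    placed = True
                    break
            if placed:
                break
            y = shelves[-1][0] + shelves[-1][1]
            if y + h <= H:                    # open a new shelf in this bin
                shelves.append([y, h, w])
                placed = True
                break
        if not placed:
            bins.append([[0, h, w]])          # open a new bin
    return len(bins)

rects = [(4, 3), (3, 3), (5, 2), (2, 2), (6, 4), (3, 1)]
print(shelf_first_fit(rects, 10, 6), "bin(s)")  # 2 bin(s)
```

The unused strips above each shelf are exactly the kind of lost area the proposed pretreatment tries to exploit before the metaheuristic takes over.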
A Comparison of Image Data Representations for Local Stereo Matching
The stereo matching problem, while having been studied for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well each representation reduces the cost of the correct correspondence relative to other possible matches.
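A concrete instance of such a cost function is the sum of absolute differences (SAD) over a local window, shown below on a synthetic pair whose true disparity is known. The window size and intensity representation here are illustrative choices; the paper's point is precisely that different representations can be plugged into this same comparison.

```python
import numpy as np

def sad_cost(left, right, x, y, d, half=2):
    """Sum of absolute differences between a window in the left image and
    the same window shifted by disparity d in the right image."""
    wl = left[y - half:y + half + 1, x - half:x + half + 1]
    wr = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
    return float(np.abs(wl - wr).sum())

def best_disparity(left, right, x, y, max_d=5):
    """Winner-take-all: the disparity with the lowest matching cost."""
    costs = [sad_cost(left, right, x, y, d) for d in range(max_d + 1)]
    return int(np.argmin(costs))

# Synthetic pair: the right image is the left image shifted by a disparity of 3
rng = np.random.default_rng(3)
left = rng.uniform(size=(20, 20))
d_true = 3
right = np.zeros_like(left)
right[:, :-d_true] = left[:, d_true:]

print(best_disparity(left, right, x=10, y=10))  # 3
```

Evaluating how sharply the cost at the true disparity undercuts the costs of the wrong candidates, for each candidate data representation, is the kind of comparison the experimental analysis performs.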
Digital Cinema Watermarking State of Art and Comparison
Nowadays, the vigorous popularity of video processing techniques has resulted in an explosive growth of illegal use of multimedia data, so watermarking security has received much more attention. The purpose of this paper is to explore several watermarking techniques in order to observe their specificities and select the most suitable methods to apply in the digital cinema domain against movie piracy, by creating an invisible watermark that encodes the date, time, and place where the piracy occurred. We study three principal watermarking techniques in the frequency domain: spread spectrum, the wavelet transform domain, and the digital cinema watermarking transform domain. In this paper, a detailed technique is presented in which embedding is performed using the direct-sequence spread spectrum technique in the DWT domain. Experimental results show that the algorithm provides high robustness and good imperceptibility.
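The spread-spectrum-in-DWT idea can be sketched in NumPy: a one-level Haar transform splits the image into subbands, a pseudo-noise pattern (the secret key) is added to a mid-frequency subband, and blind detection correlates the subband with the same pattern. The Haar filter, the subband choice, and the strength alpha are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[::2, :] + img[1::2, :]) / 2     # row averages
    d = (img[::2, :] - img[1::2, :]) / 2     # row differences
    LL = (a[:, ::2] + a[:, 1::2]) / 2
    LH = (a[:, ::2] - a[:, 1::2]) / 2
    HL = (d[:, ::2] + d[:, 1::2]) / 2
    HH = (d[:, ::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, size=(64, 64))
LL, LH, HL, HH = haar_dwt2(img)

# Spread-spectrum embedding: add a +/-1 pseudo-noise pattern (the key),
# scaled by alpha, to a mid-frequency subband
pn = rng.choice([-1.0, 1.0], size=HL.shape)
alpha = 10.0                                 # strength: robustness vs. visibility
HL_marked = HL + alpha * pn

# Blind detection: correlate the received subband with the PN pattern;
# the result is ~alpha when the mark is present and ~0 otherwise
corr = np.mean(HL_marked * pn)
print(round(corr, 1))
```

Because detection needs only the PN key and not the original image, the scheme is blind; comparing `corr` against a threshold such as alpha/2 decides whether the mark, and hence the traitor-tracing payload, is present.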
Application of Fractional Model Predictive Control to Thermal System
The article presents an application of Fractional Model Predictive Control (FMPC) to a fractional-order thermal system, using a Controlled Auto-Regressive Integrated Moving Average (CARIMA) model obtained by discretization of a continuous fractional differential equation. Moreover, the output deviation approach is exploited to design the K-step-ahead output predictor, and the corresponding control law is obtained by solving a quadratic cost function. Experimental results on a thermal system are presented to emphasize the performance and effectiveness of the proposed predictive controller.