Accepted Papers
Contextual Factors with an Impact on the Design and Management of Health Information Systems’ Interoperability

Grace Kobusinge1,2, 1Gothenburg University, Gothenburg, Sweden and 2Makerere University, P.O. Box 7062 Kampala, Uganda

ABSTRACT

Owing to their renowned information processing and dissemination power, health information systems (HIS) can readily make past patient medical information available across the continuum of care to facilitate ongoing treatment. However, a number of existing HIS are designed as vertical silos with no interoperability provisions and therefore cannot exchange patient information. At the same time, little is known about the intricacies and factors that surround HIS interoperability implementations. This study therefore employs an institutional-lens perspective to investigate contextual factors with an impact on the design of HIS interoperability. Through this perspective, the following seven contextual factors were identified: institutional autonomism, intended system goals, existing health information systems, national HIS implementation guidelines, interoperability standards, policy, and resources. A further implication of the study is the use of the institutional lens in making sense of the systems' context of integration in order to discover salient factors that might affect the design of health information systems' interoperability.

KEYWORDS

Health Information Systems’ Interoperability, Contextual Factors, Health Information Systems’ Designing.


Readiness Analysis for Enterprise Information System Implementation: The Case Study of Nigerian Manufacturing Companies

Nwanneka Eya1 and Rufai Ahmad2, 1Department of Computer and Information Science, University of Strathclyde, Glasgow, Scotland and 2University of Strathclyde, G1 1XH, Glasgow, Scotland, United Kingdom

ABSTRACT

Enterprise information systems play an important role in manufacturing companies by integrating a firm's information, operating procedures, and functions across all departments, resulting in better operation in the global business environment. In developing countries like Nigeria, most manufacturing firms face the need to compete efficiently in global markets, owing to Nigeria's dynamic, continually evolving business environment and the substantial government support for indigenous manufacturers. The need for an enterprise information system therefore cannot be overemphasized; but because an enterprise information system is a major investment that is expensive and time-consuming, assessing whether a company is ready for such a major transition becomes very important. In assessing the readiness of Nigerian manufacturing companies for ERP implementation, there are many factors to consider. This study assesses the readiness level of Nigerian manufacturing companies based on survey responses from a wide spectrum of Nigerian manufacturing firms. The findings show that readiness level is mainly influenced by technological, organizational, and environmental factors, principally assumed benefits, assumed difficulty, technological architecture, technological skills, competitive pressure, organization size, and information-management priority. Technological factors were observed to have the greatest impact in determining a firm's readiness level. This paper suggests a structure, or standard, that Nigerian manufacturers could use to ascertain their company's readiness level before investing in an enterprise information system.

KEYWORDS

Enterprise information system, Readiness analysis, Nigeria, Manufacturing, Company.


Transcript Level Analysis Improves the Understanding of Bladder Cancer

Xiang Ao and Shuaicheng Li, Department of Computer Science, City University of Hong Kong, Hong Kong

ABSTRACT

Bladder cancer (BC) is one of the most prevalent diseases globally, attracting various studies on BC-relevant topics. High-throughput sequencing makes it convenient to extensively explore genetic changes, such as variation in gene expression, in the development of BC. In this study, we performed differential gene and transcript expression (DGE and DTE) analyses and differential transcript usage (DTU) analysis on an RNA-seq dataset of 42 bladder cancer patients. DGE analysis reported 8543 significantly differentially expressed (DE) genes. In contrast, DTE analysis detected 14350 significantly DE transcripts from 8371 genes, and DTU analysis detected 27914 significantly differentially used (DU) transcripts from 8072 genes. Analysis of the top 5 DE genes demonstrated that DTE and DTU analyses locate the source of changes in gene expression at the transcript level. The transcript-level analysis also identified DE and DU transcripts from previously reported mutated genes related to BC, such as ERBB2, ESPL1, and STAG2, suggesting an intrinsic connection between gene mutation and alternative splicing. Hence, transcript-level analysis may help disclose the underlying pathological mechanism of BC and further guide the design of personalized treatment.
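
As a toy illustration of what DTU analysis tests (not the paper's pipeline, whose tools are unspecified here), the sketch below compares the usage proportions of two hypothetical transcripts of one gene between conditions with a chi-square test; all counts are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: transcripts T1, T2 of one gene; columns: summed read counts (tumor, normal).
counts = np.array([[520, 480],   # transcript T1
                   [130, 370]])  # transcript T2

usage_tumor = counts[:, 0] / counts[:, 0].sum()    # per-transcript usage in tumor
usage_normal = counts[:, 1] / counts[:, 1].sum()   # per-transcript usage in normal
chi2, p, dof, _ = chi2_contingency(counts)         # does usage differ by condition?
print(f"usage tumor={usage_tumor}, normal={usage_normal}, p={p:.3g}")
```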

KEYWORDS

Bladder Cancer, Differential Gene Expression, Differential Transcript Expression, Differential Transcript Usage


Automatic Identification and Measurement of Antibiogram Analysis

Merve Duman, Roya Choupani and Faris Serdar Tasel, Department of Computer Engineering, Cankaya University, Ankara, Turkey

ABSTRACT

In this study, an automatic identification method for antibiogram analysis is implemented, existing methods are investigated, and the results are discussed. In antibiogram analysis, the inhibition zones of drugs read by humans may be measured with mistakes; errors such as misreading during the analysis process, or conditions like imperfect or partial seeding of inhibition zones, can be addressed with automatic identification methods. There is also a need for periodic reading or a tracking system, because inhibition zones change with time. To overcome these problems, several improvements are made to the image. As pre-processing operations, Otsu thresholding, largest-object finding, binary image masking, and morphological erosion and closing are applied. A Circular Hough Transform is used to find the drug disks, and profile lines are drawn to find the inhibition zones; Otsu thresholding is used to determine the zone borders. The results obtained from the algorithm are evaluated and discussed.
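
A minimal OpenCV sketch of the described pipeline, assuming a grayscale plate photo named antibiogram_plate.jpg; the parameter values are illustrative, not the paper's.

```python
import cv2
import numpy as np

img = cv2.imread("antibiogram_plate.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu thresholding, then morphological closing/erosion to clean seeding artifacts.
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((5, 5), np.uint8)
cleaned = cv2.erode(cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel), kernel)

# Circular Hough Transform to locate the antibiotic disks.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
                           param1=100, param2=30, minRadius=10, maxRadius=40)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Profile lines would be drawn radially from (x, y); the inhibition-zone
        # border is taken where the profile crosses the Otsu threshold level.
        print(f"disk at ({x}, {y}), radius {r}px")
```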

KEYWORDS

Antibiogram Analysis, Image Processing, Feature Extraction, Object Detection, Image Segmentation


Automatic Diagnosis of Acne and Rosacea using Deep Learning

Firas Gerges and Frank Y. Shih, Department of Computer Science, New Jersey Institute of Technology, Newark, New Jersey, USA

ABSTRACT

Acne and Rosacea are two common skin diseases that affect many people in the United States. The two conditions can produce similar symptoms, which leads patients to misidentify their own case. In this paper, we aim to develop a model that can efficiently differentiate between these two skin conditions. A deep learning model is proposed to automatically distinguish Rosacea from Acne cases using images of infected skin. Using image augmentation, the size of the original dataset is enlarged. Experimental results show that our model achieves very high performance, with an average testing accuracy of 99%.
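
The paper's exact architecture is not given in the abstract; as a hedged sketch, the snippet below shows the two ingredients it mentions, image augmentation and a CNN with a two-way head (acne vs. rosacea), using an off-the-shelf ResNet-18 as a stand-in backbone.

```python
import torch.nn as nn
from torchvision import transforms, models

# Illustrative augmentation pipeline used to enlarge a small dermatology dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Stand-in backbone with a 2-way classification head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
```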

KEYWORDS

Acne, Rosacea, Deep Learning, Image Processing, Convolutional Neural Networks


Spectrophotometric Analysis Based Data Fusion for Shape Recovery

Zhi Yang1 and Youhua Yu2, 1School of Humanities, Jinggangshan University, Ji’an, P.R. China and 2Matrix Technology (Beijing Ltd.), Daxing, Beijing, P.R. China

ABSTRACT

Fusing data from optical cameras with data from other physical sensors to form a uniform surface has long been a fascinating idea in the academic world, especially in the computer vision community. Due to practical limits, research on the subject has endured a long and arduous journey, and its prospects would be far more obscure were it not for breakthroughs in shape recovery. Benefiting from that advancement, in this paper we propose an updated version of the Luminance Integration (LI) method. The key contribution of our work is the introduction of Spectrophotometric Analysis (SA), which handles luminance/intensity values in a fully justified reflectance-spectroscopic fashion, resolving the confusion introduced by colorimetric models and photoelectric equipment. In addition, a framework of statistical spatial alignment is used for data fusion, in which geometrical and semantic inferences are given. In particular, the alignment process generally starts with a series of spatial transformations, based on the assumption that these transformations can diminish unwanted variance components. Moreover, since the voxel-based analysis all derives from the same rigid body of an object, the Magnitude of Relative Error (MRE) caused by regionally specific effects of the frame of reference can be reduced to a minimum. Finally, in an extensive series of experiments, we carefully evaluate the parametric models we construct, with the purpose of refining the shape alignment with proper deterministic and semantic inferences. Our results show that our method effectively improves the accuracy of shape recovery and the on-line performance of matching corresponding spatial data, especially data from optical cameras and depth/distance sensors.
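
The abstract's spatial-alignment stage rests on rigid-body transformations; a standard least-squares rigid alignment (the Kabsch method) is sketched below as one plausible building block, not the authors' exact procedure.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t with Q ~ P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

P = np.random.rand(100, 3)                           # e.g. camera-derived surface points
Q = P + np.array([1.0, 0.0, 0.0])                    # same points in a shifted sensor frame
R, t = rigid_align(P, Q)                             # recovers identity rotation and the shift
```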

KEYWORDS

Spectrophotometric Analysis, Data Fusion, Shape Recovery, Point Cloud, Surface Alignment


Safety Helmet Detection in Industrial Environment Using Deep Learning

Ankit Kamboj and Nilesh Powar, Advanced Analytics Team, Cummins Technologies India Pvt. Ltd, Pune, India

ABSTRACT

Safety is of predominant value for employees working in industrial and construction environments. Real-time object detection is an important technique for detecting violations of safety compliance in an industrial setup. Negligence in wearing safety helmets can be hazardous to workers; hence an automatic surveillance system that detects persons not wearing helmets is of utmost importance and would reduce the labor-intensive work of monitoring violations. In this paper, we deploy an advanced convolutional neural network (CNN) algorithm, the Single Shot Multibox Detector (SSD), to monitor safety-helmet violations. Various image processing techniques are applied to the video data collected from the industrial plant. A practical and novel safety-detection framework is proposed in which the CNN first detects persons in the video data and, in a second step, detects whether each person is wearing a safety helmet. Using the proposed model, deep learning inference benchmarking is performed on a Dell Advanced Tower workstation. A comparative study of the proposed approach is analysed in terms of detection accuracy (average precision), which illustrates the effectiveness of the proposed framework.
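
A rough sketch of the two-stage idea, using torchvision's pretrained SSD as a stand-in detector (the paper trains its own model; the threshold and input here are illustrative).

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Stage 1: a pretrained SSD finds persons in a video frame.
detector = ssd300_vgg16(weights="COCO_V1").eval()
frame = torch.rand(3, 300, 300)            # stand-in for a decoded video frame in [0, 1]
with torch.no_grad():
    out = detector([frame])[0]
persons = [b for b, l, s in zip(out["boxes"], out["labels"], out["scores"])
           if l.item() == 1 and s.item() > 0.5]   # COCO label 1 = person

# Stage 2 would crop each person box and run a helmet / no-helmet classifier on it.
print(f"{len(persons)} person box(es) to check for helmets")
```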

KEYWORDS

Safety Helmet Detection, Deep Learning, SSD, CNN, Image Processing


Cobb Angle Measurement from Radiograph Images Using CADX

Hritam Basak and Sreejeet Maity, Electrical Engineering Department, Jadavpur University, Kolkata, India

ABSTRACT

In this paper we propose an automated method for Cobb angle computation from radiograph (X-ray) images of scoliosis patients, with the objective of increasing the reliability of spinal curvature quantification. The automatic technique comprises four main steps: pre-processing (denoising and filtering), region-of-interest (ROI) identification, feature extraction, and Cobb angle computation from the extracted spine centre-line. An SVM (support vector machine) classifier is used for object identification and feature extraction. The spine is assumed to be a continuous structure rather than a series of discrete vertebral bodies with individual orientations. Several methods are used to identify the centre-line of the spine: morphological operations, Gaussian blurring, and polynomial fitting. Tangents are then taken at every point of the extracted centre-line, and the Cobb angle is evaluated from these sets of tangents. For analysis of the automated diagnosis process, the approach was evaluated on 25 coronal X-ray images. ROI identification based on the SVM classifier is effective, with a specificity of 100%, and approximately 58% of the centre-lines extracted from these ROIs were accurate, with very little or negligible angular variability. Because of low radiation dose and several other factors, the endplates and edges of the spine in radiograph images are blurred; hence the continuous-contour-based approach gives better reliability.
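
The tangent-based computation lends itself to a compact worked example: fit a polynomial to the extracted centre-line, differentiate to get tangent slopes, and take the maximum difference of inclinations. The centre-line below is synthetic.

```python
import numpy as np

# Hypothetical centre-line (x = lateral offset, y = vertical position in pixels).
y = np.linspace(0, 400, 50)
x = 20 * np.sin(y / 120.0)                    # stand-in for an extracted spine centre-line

coeffs = np.polyfit(y, x, deg=5)              # polynomial fit to the centre-line
slopes = np.polyval(np.polyder(coeffs), y)    # dx/dy: tangent slope at every point
angles = np.degrees(np.arctan(slopes))        # tangent inclinations

# Cobb angle = maximum difference between tangent inclinations along the curve.
print(f"Cobb angle ~ {angles.max() - angles.min():.1f} degrees")
```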

KEYWORDS

Scoliosis, Cobb Angle, CADx


Rate Control Based on Similarity Analysis in Multi-view Video Coding

Tao Yan1, In-Ho Ra2, Shaojie Hou1, Jingyu Feng1, and Zhicheng Wu2, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The joint video expert group proposed the JMVM reference model for multi-view video coding, but the model does not provide an effective rate control scheme. This paper proposes a rate control algorithm for multi-view video coding (MVC) based on correlation analysis. The core of the algorithm is to first divide all images into six types of coded frames according to the structural relationship between disparity prediction and motion prediction, which improves the binomial rate-distortion model, and then to perform analysis between different views based on similarity. Rate control is organized into a four-layer structure for multi-view video coding. Among these, frame-layer rate control considers hierarchical B frames and other factors when allocating bits, and basic-unit-layer rate control uses different quantization parameters according to the content complexity of each macroblock. The average error between the actual bit rate and the target bit rate of the proposed algorithm can be kept within 0.94%.
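
As an illustration of one of the four layers, the sketch below allocates a frame-layer budget proportionally to per-frame-type weights; the weight values are invented, whereas the paper derives its allocation from the six coded-frame types and hierarchical-B considerations.

```python
# Illustrative frame-layer allocation: remaining GOP bits are split according to
# per-frame-type weights (all values invented; "V" stands for an inter-view frame).
weights = {"I": 3.0, "P": 2.0, "V": 1.8, "B0": 1.5, "B1": 1.0, "B2": 0.8}

def frame_target_bits(remaining_bits, frame_types):
    total_w = sum(weights[t] for t in frame_types)
    return [remaining_bits * weights[t] / total_w for t in frame_types]

print(frame_target_bits(120_000, ["I", "B1", "B2", "P"]))
```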

KEYWORDS

Multi-view video coding, Rate control, Bit allocation, Similarity analysis, Basic unit layer


A Hybrid Artificial Bee Colony Strategy for T-way Test Set Generation with Constraints Support

Ammar K Alazzawi1*, Helmi Md Rais1, Shuib Basri1, Yazan A. Alsariera4, Abdullateef Oluwagbemiga Balogun1,2, Abdullahi Abubakar Imam1,3, 1Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak, Malaysia, 2Department of Computer Science, University of Ilorin, PMB 1515, Ilorin, Nigeria, 3Department of Computer Science, Ahmadu Bello University, Zaria, Nigeria and 4Department of Computer Science, Northern Border University, Arar 73222, Saudi Arabia

ABSTRACT

t-way interaction testing is a systematic approach to test set generation. It is a vital test planning method in software testing that generates test sets based on the interactions among parameters, so as to cover the possible value combinations; the strength t specifies the degree of interaction between parameters. However, some combinations should be excluded when generating the final test set because they produce invalid outputs or are impossible or unwanted (e.g., by the system requirements). These combinations are known as constraints, or forbidden combinations. Several t-way strategies have been proposed in existing studies to address the test set generation problem; however, generating the optimal test set remains an open research problem, as it is NP-hard. This study therefore proposes a novel hybrid artificial bee colony (HABC) t-way test set generation strategy with constraints support. The proposed approach hybridizes the artificial bee colony (ABC) algorithm with particle swarm optimization (PSO): PSO is integrated as the exploratory agent for the ABC, hence the hybrid nature, and the information-sharing ability of PSO via its weight factor is used to enhance the performance of ABC. The output of the hybrid ABC is a set of promising, near-optimal test set combinations. Experimental results show that HABC outperforms existing strategies (HSS, LAHC, SA_SAT, PICT, TestCover, mATEG_SAT) and yields better test sets.
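
The fitness driving any such search is interaction coverage. A minimal sketch for t = 2 (pairwise) is shown below; a constraints-aware version would simply skip forbidden value combinations when enumerating.

```python
from itertools import combinations, product

def covered(tests, n_params):
    """Set of pairwise interactions (t = 2) covered by a list of tests."""
    out = set()
    for test in tests:
        for i, j in combinations(range(n_params), 2):
            out.add((i, j, test[i], test[j]))
    return out

all_tests = list(product([0, 1], repeat=3))      # 3 boolean parameters, 8 possible tests
target = covered(all_tests, 3)                   # every pair that must be covered
suite = [(0, 0, 0), (1, 1, 1)]                   # a candidate test suite
print(len(target - covered(suite, 3)), "interactions still uncovered")
```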

KEYWORDS

Software testing, t-way testing, hybrid artificial bee colony, meta-heuristics, optimization problem.


A Novel Bit Allocation Algorithm in Multi-view Video

Tao Yan1, In-Ho Ra2, Jingyu Feng1, Linyun Huang1 and Zhicheng Wu1, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The difficulty of rate control for multi-view video coding (MVC) lies in how to allocate bits between views. Our previous research on bit allocation among viewpoints used correlation analysis among viewpoints to predict the weight of each viewpoint, but when the scene changes this prediction method produces large errors. This article therefore avoids that situation through scene detection. The core of the algorithm is to first divide all images into six types of coded frames according to the structural relationship between disparity prediction and motion prediction, improving the binomial rate-distortion model, and then to perform inter-view, frame-layer, and basic-unit-layer bit allocation and rate control based on the coded information. A reasonable bit rate is allocated between viewpoints based on the coded information, and the frame-layer bit rate is allocated using frame complexity and temporal activity. Experimental simulation results show that, compared with the current JVT MVC scheme with fixed quantization parameters, the algorithm can effectively control the bit rate of MVC while maintaining high coding efficiency.
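
A minimal sketch of the scene-detection step, assuming a simple normalized-histogram difference with an invented threshold; on a detected change, view weights would be re-estimated rather than carried over from inter-view correlation.

```python
import numpy as np

def scene_change(prev_frame, frame, threshold=0.4):
    """Normalized-histogram difference test for a scene cut (threshold invented)."""
    h1, _ = np.histogram(prev_frame, bins=64, range=(0, 256))
    h2, _ = np.histogram(frame, bins=64, range=(0, 256))
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return 0.5 * np.abs(h1 - h2).sum() > threshold   # total-variation distance

prev = np.random.randint(0, 256, (240, 320))          # toy luma frames
curr = np.random.randint(0, 256, (240, 320))
print(scene_change(prev, curr))   # True would trigger re-estimating view weights
```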

KEYWORDS

Multi-view video coding, Rate control, Bit allocation, Rate distortion model, Basic unit layer


A Semi-supervised Learning Approach to Forecast CPU Usages Under Peak Load in an Enterprise Environment

Nitin Khosla1 and Dharmendra Sharma2, 1Assistant Director – Performance Engineering, Dept. of Home Affairs, Australia and 2Professor – Computer Science, University of Canberra, Australia

ABSTRACT

The aim of the semi-supervised neural-network learning approach in this paper is to apply and improve supervised classifiers and to develop a model that predicts CPU usage under unpredictable peak load (stress conditions) in a large enterprise application environment, with several hundred hosted applications and a large number of concurrent users. The method forecasts the likelihood of extreme CPU use caused by a burst in web traffic, mainly from large numbers of concurrent users. Many applications run simultaneously in a real-time system in a large enterprise IT environment. The model extracts features by analysing the workload patterns of user demand, which are largely hidden in data related to key transactions of core IT applications. It creates synthetic workload profiles by simulating concurrent users, executes the key scenarios in a test environment, and uses our model to predict excessive CPU utilization under peak-load (stress) conditions. We used the Expectation-Maximization method with different dimensionality and regularization settings, attempting to extract and analyse the parameters that improve the likelihood of the model by maximizing it after marginalizing out the unknown labels. Workload demand prediction with semi-supervised learning has tremendous potential in capacity planning for optimizing and managing IT infrastructure at lower risk.
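
As a hedged illustration of the EM component, the sketch below fits a Gaussian mixture over workload-feature vectors (features and shapes invented) and reads off the soft probability that an interval belongs to the highest-load regime.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in workload features per interval: [concurrent users, tx rate, CPU%].
X = np.random.rand(1000, 3)

gm = GaussianMixture(n_components=3, reg_covar=1e-4, random_state=0).fit(X)  # EM fit
peak = int(np.argmax(gm.means_.sum(axis=1)))   # component with the highest mean load
p_peak = gm.predict_proba(X)[:, peak]          # soft E-step responsibilities
print("P(peak regime), first 5 intervals:", p_peak[:5].round(3))
```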

KEYWORDS

Semi-supervised learning, Performance Engineering, Stress testing, Neural Nets, Machine learning applications


Advanced Rate Control Technologies for MVC

Tao Yan1, In-Ho Ra2, Zhicheng Wu1, Shaojie Hou1 and Jingyu Feng1, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The current multi-view video coding (MVC) reference model of the Joint Video Team (JVT) does not provide an efficient rate control scheme. Since MVC rate control has not yet been thoroughly studied, and based on an analysis of the shortcomings of existing rate-distortion models and of the characteristics of multi-view video coding, this paper proposes an MVC rate control algorithm based on the quadratic rate-distortion (R-D) model. The core of the algorithm is to first divide all images into six types of coded frames according to the structural relationship between disparity prediction and motion prediction, improving the binomial rate-distortion model, and then to perform inter-view, frame-layer, and basic-unit-layer bit allocation and rate control based on the coded information. A reasonable bit rate is allocated between viewpoints based on the coded information, and the frame-layer bit rate is allocated using frame complexity and temporal activity. Experimental simulation results show that, compared with the current JVT MVC scheme with fixed quantization parameters, the algorithm can effectively control the bit rate of multi-view video coding while maintaining high coding efficiency.
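
The quadratic R-D model admits a compact worked example: given a frame's target bits and mean absolute difference (MAD), solve R = MAD·(X1/Q + X2/Q²) for the quantization step Q. Parameter values below are invented.

```python
import numpy as np

def qstep_from_target(R_target, X1, X2, mad):
    """Solve the quadratic R-D model R = mad*(X1/Q + X2/Q^2) for quant step Q.

    X1, X2 are model parameters updated from previously encoded frames."""
    # Substituting u = 1/Q gives: mad*X2*u^2 + mad*X1*u - R_target = 0.
    roots = np.roots([mad * X2, mad * X1, -R_target])
    inv_q = max(r.real for r in roots if r.real > 0)
    return 1.0 / inv_q

print(qstep_from_target(R_target=48_000, X1=1200.0, X2=90_000.0, mad=8.0))
```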

KEYWORDS

MVC (multi-view video coding), Rate control, Bit allocation, Human visual characteristics


Placement of Virtualized Elastic Regenerator with Threshold of Path Length in Distance-adaptive Translucent Elastic Optical Networks

Ching-Fang Hsu, Hao-Chen Kao, Yun-Chung Ho, Eunice Soh and Ya-Hui Jhang, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan

ABSTRACT

Distance-adaptive Elastic Optical Networks (EONs) are an efficient network environment in which traffic demands can flexibly be assigned a number of frequency slots (FSs) according to their bandwidth. In a distance-adaptive EON, routing, modulation format, and spectrum assignment (RMSA) is the main issue to be solved: by appropriately choosing a routing path and selecting a better modulation format, spectral resources can be allocated efficiently. However, because of the maximum transmission distance (MTD) constraint, if the transmission distance exceeds the MTD, the optical path may fail to be set up unless virtualized elastic regenerators (VERs) are exploited. VERs regenerate the signal and reset the accumulated length of the traversed path. To reduce deployment cost, placing VERs strategically is an important issue; a network with a limited number of VERs is a so-called translucent EON. Previous studies considered neither the MTD nor the criticality of a node across all shortest paths, so their spectrum efficiency is not significant. In this work, we propose two VER placement strategies and compare them with the related literature. Our strategies tend to place VERs on nodes in longer transmission segments, resulting in better FS consumption. Simulation results demonstrate that the proposed schemes attain notable improvement in blocking performance.
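
A toy sketch of one placement heuristic consistent with the description: score each intermediate node by how often it lies on shortest paths whose length exceeds the MTD, then place VERs at the top-scoring nodes. Topology and lengths are invented.

```python
import networkx as nx

# Toy topology with link lengths in km (invented); MTD is the reach limit.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 600), (1, 2, 700), (2, 3, 500),
                           (0, 4, 900), (4, 3, 800)], weight="length")
MTD = 1000

# Score each intermediate node by how often it sits on an over-reach shortest path.
score = {n: 0 for n in G}
for s in G:
    for d in G:
        if s >= d:
            continue
        path = nx.shortest_path(G, s, d, weight="length")
        if nx.path_weight(G, path, weight="length") > MTD:
            for n in path[1:-1]:
                score[n] += 1
print("best VER site:", max(score, key=score.get))
```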

KEYWORDS

Elastic Optical Networks (EONs); Distance Adaptive; Translucent Optical Networks; Virtualized Elastic Regenerator (VER); Routing, Modulation Format and Spectrum Assignment (RMSA); VER Placement


A Hybrid Machine Learning Model with Cost-Function based Outlier Removal and its Application on Credit Rating

Aurora Y. Mu, Department of Mathematics, Western Connecticut State University, Danbury, Connecticut, United States of America

ABSTRACT

This paper establishes a methodology for building hybrid machine learning models, aiming to combine the power of different machine learning algorithms on different types of features and hypotheses. A generic cost-based outlier removal algorithm is introduced as a preprocessing step on the training data. We implement a hybrid machine learning model for a credit rating problem and experiment with combinations of three types of machine learning algorithms: SVM, decision trees (DT), and logistic regression (LR). The new hybrid models show improved performance compared to the traditional single SVM, DT, and LR models. The methodology can be further explored with other algorithms and applications.
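
A minimal sketch of cost-based outlier removal, assuming per-sample log loss as the cost and logistic regression as the base learner (the paper's cost function and model mix are not specified in the abstract).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Fit once, drop the training points with the highest per-sample cost, refit.
clf = LogisticRegression(max_iter=1000).fit(X, y)
p = clf.predict_proba(X)[np.arange(len(y)), y]
loss = -np.log(np.clip(p, 1e-12, 1))           # per-sample log loss as the cost
keep = loss < np.quantile(loss, 0.95)          # remove the costliest 5% (cut-off assumed)
clf_clean = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
```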

KEYWORDS

Machine Learning, Outlier Removal, Credit Score Modelling, Hybrid Learning Model


A Non-Convex Optimization Framework for Large-Scale Low-rank Matrix Factorization

Sajad Fathi Hafshejaniy1, Saeed Vahidian3, Zahra Moaberfard2 and Bill Lin3, 1Department of Computer Science, McGill University, Montreal, Canada, 2Department of Computer Science, Apadana University, Shiraz, Iran and 3Department of Electrical and Computer Engineering, University of California San Diego, CA, USA

ABSTRACT

Low-rank matrix factorization problems such as nonnegative matrix factorization (NMF) can be categorized as clustering or dimension reduction techniques. The latter denotes techniques designed to find representations of a high-dimensional dataset in a lower-dimensional manifold without significant loss of information; if such a representation exists, it ought to contain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing or storing the matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size, together with a proper nonmonotone strategy governed by the convex combination parameter ηk, results in a new algorithm that can significantly improve CPU time, efficiency, and the number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient in comparison with existing methods and demonstrate the superiority of our algorithms.
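
The BB ingredient can be sketched compactly: a step size computed from successive iterates and gradients, used inside a projected-gradient update for the factor W in min ||V - WH||_F^2, W >= 0. This illustrates the idea, not the authors' full BB-scaled CG algorithm.

```python
import numpy as np

def bb_step(x, x_prev, g, g_prev):
    """Barzilai-Borwein (BB1) step size from successive iterates and gradients."""
    s, y = x - x_prev, g - g_prev
    return float(s @ s) / max(float(s @ y), 1e-12)

def update_W(V, W, W_prev, H, G_prev):
    """One projected-gradient step for W in min ||V - WH||_F^2 s.t. W >= 0."""
    G = (W @ H - V) @ H.T                          # gradient with respect to W
    a = bb_step(W.ravel(), W_prev.ravel(), G.ravel(), G_prev.ravel())
    return np.maximum(W - a * G, 0.0), G           # project onto the nonnegative orthant

rng = np.random.default_rng(0)
V, W, H = rng.random((30, 20)), rng.random((30, 5)), rng.random((5, 20))
W1 = np.maximum(W - 1e-3 * ((W @ H - V) @ H.T), 0.0)  # plain first step
W2, G1 = update_W(V, W1, W, H, (W @ H - V) @ H.T)     # BB-scaled second step
```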

KEYWORDS

Barzilai and Borwein Method, Matrix factorization, Non-Convex, Nonmonotone method


Oversampling Log Messages Using A Sequence Generative Adversarial Network for Anomaly Detection and Classification

Amir Farzad and T. Aaron Gulliver, Department of Electrical and Computer Engineering, University of Victoria, PO Box 1700, STN CSC, Victoria, BC Canada

ABSTRACT

Dealing with imbalanced data is one of the main challenges in machine/deep learning algorithms for classification. The issue is especially important with log-message data, which is typically imbalanced, negative (anomalous) logs being rare. In this paper, a model is proposed that generates text log messages using a SeqGAN network. Features are then extracted using an autoencoder, and anomaly detection and classification are performed using a GRU network. The proposed model is evaluated on two imbalanced log datasets, BGL and OpenStack. Results are presented which show that oversampling and balancing the data increases the accuracy of anomaly detection and classification.
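
A hedged PyTorch sketch of the classification stage: a GRU over per-message feature vectors (dimensions invented) with a linear head; SeqGAN-generated messages would be added to the minority class before training.

```python
import torch
import torch.nn as nn

class LogGRUClassifier(nn.Module):
    """GRU over autoencoder features of a log sequence (shapes illustrative)."""
    def __init__(self, feat_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        _, h = self.gru(x)
        return self.head(h[-1])           # classify from the final hidden state

logits = LogGRUClassifier()(torch.randn(8, 20, 64))   # 8 sequences of 20 messages
```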

KEYWORDS

Deep Learning, Oversampling, Log messages, Anomaly detection, Classification


Playing Virtual Musical Drums by Mems 3D Accelerometer Sensor Data and Machine Learning

Shaikh Farhad Hossain, Kazuhisa Hirose, Shigehiko Kanaya and Md. Altaf-Ul-Amin, Computational Systems Biology Lab, Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), 8916-5, Takayama, Ikoma, Nara 630-0192, Japan

ABSTRACT

Music is an entertaining part of our lives, and musical instruments are its important supporting elements. The acoustic drum plays a vital role when a song is sung. With the times, the style of musical instruments changes while the tone stays identical; the electronic drum is one example. In this work, we have developed "Virtual Musical Drums" by combining MEMS 3D accelerometer sensor data with machine learning. Machine learning is spreading into every arena of problem solving, and MEMS sensors are shrinking large physical systems into small ones. We designed eight virtual drums for two sensors and obtained 91.42% detection accuracy in a simulation environment and 88.20% detection accuracy in a real-time environment with 20% window overlap. Although the detection accuracy was satisfying, the virtual drum sound was unrealistic. We therefore implemented multiple-hit detection within a fixed interval, sound-intensity calibration, and parallel processing of sound tunes, and selected the virtual-drum sound files based on acoustic drum sound patterns and durations. Finally, we completed "Playing Virtual Musical Drums" and played the virtual drums successfully, like an acoustic drum. This work demonstrates a different application of MEMS sensors and machine learning: data serve not only information but also musical entertainment.
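
A rough sketch of the recognition core, assuming windowed statistics over 3D accelerometer samples as features and an SVM over eight drum labels; all data below are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(acc, win=50, overlap=0.2):
    """Mean/std/peak features over sliding accelerometer windows (illustrative)."""
    step = int(win * (1 - overlap))                 # 20% overlap, as in the paper
    feats = []
    for s in range(0, len(acc) - win + 1, step):
        w = acc[s:s + win]
        feats.append(np.hstack([w.mean(0), w.std(0), np.abs(w).max(0)]))
    return np.array(feats)

acc = np.random.randn(1000, 3)              # stand-in for 3D accelerometer samples
X = window_features(acc)
y = np.random.randint(0, 8, len(X))         # 8 virtual drums (labels invented)
clf = SVC(kernel="rbf").fit(X, y)           # kNN would be a drop-in alternative
```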

KEYWORDS

Virtual musical drum, MEMS, SHIMMER, support vector machines (SVM) and k-Nearest Neighbors (kNN)


Industrial Duct Fan Maintenance Predictive Approach Based on Random Forest

Mashael Maashi, Nujood Alwhibi, Fatima Alamr, Rehab Alzahrani, Alanoud Alhamid and Nourah Altawallah, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia

ABSTRACT

When a manufacturer's equipment encounters an unexpected failure, or undergoes unnecessary maintenance under a pre-scheduled plan, the result is time-consuming and costly, amounting to millions of hours worldwide annually. Predictive maintenance can help, using modern sensing technology and sophisticated data analytics to predict the maintenance required by machinery and devices. The demands on modern maintenance solutions have never been greater: constant pressure to demonstrate cost-effectiveness and return on investment and to improve the competitiveness of the organization is always combined with pressure to improve equipment productivity and keep machines running at maximum output. In this paper, we propose a maintenance prediction approach based on a machine learning technique, the random forest algorithm. The main focus is on industrial duct fans, as they are among the most common equipment in manufacturing industries. The experimental results show the accuracy and reliability of the proposed predictive maintenance approach.
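
A minimal random-forest sketch on stand-in duct-fan sensor features (the real features, e.g. vibration or temperature readings, are not listed in the abstract).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for duct-fan sensor features (vibration, temperature, current, ...).
X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(Xtr, ytr)
print("failure-prediction accuracy:", rf.score(Xte, yte))
```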

KEYWORDS

Predictive Maintenance, Maintenance, Random Forest, Machine Learning & Artificial Intelligence


Solving University Timetable Problem using Evolution Strategy

Che-Yuan Yang and Jian-Hung Chen, Department of Computer Science, Chung Hua University, Hsinchu, Taiwan

ABSTRACT

We solve the university timetable problem with an evolution strategy and build a practical system that any university department can use easily.
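
A minimal (mu + lambda) evolution strategy skeleton of the kind the paper applies; for timetabling, the fitness would count hard/soft constraint violations of a candidate timetable (the encoding below is a placeholder).

```python
import random

def evolve(fitness, init, mu=5, lam=20, sigma=1.0, gens=100):
    """Minimal (mu + lambda) evolution strategy; fitness is minimised."""
    pop = [init() for _ in range(mu)]
    for _ in range(gens):
        kids = [[g + random.gauss(0, sigma) for g in random.choice(pop)]
                for _ in range(lam)]                  # Gaussian mutation of parents
        pop = sorted(pop + kids, key=fitness)[:mu]    # plus-selection of survivors
    return pop[0]

# Placeholder encoding/fitness; a real timetable fitness would count violated
# room, lecturer, and time-slot constraints.
best = evolve(fitness=lambda t: sum(abs(g) for g in t),
              init=lambda: [random.uniform(-5, 5) for _ in range(20)])
```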

KEYWORDS

University timetable problem, Evolutionary algorithm, Evolution strategy.


Network Performances over Virtual Forces Algorithm For Autonomous Aerial Drone Systems

Mahamat Moussa Dogoumi and Adam Skorek, Department of Electrical Engineering, University of Quebec in Trois-Rivieres, Quebec, Canada

ABSTRACT

FANET networks have seen considerable progress recently, and solutions for positioning network nodes are numerous. One in particular, VBCA (Virtual forces Based Clustering Algorithm), offers a comprehensive and interesting approach: it positions the nodes in 3-D. In general, 3-D positioning is an NP-hard problem, but with VBCA, positioning is relatively simple. In this paper we present an optimization of communication in a VBCA-based topology. The topology is optimized by the VBCA algorithm for better area coverage; the method is 40% more efficient than existing approaches in terms of area coverage. In the first versions of VBCA, performance in terms of communication between nodes was not tested. The resulting network performance is very encouraging in terms of throughput, delay, and packet loss. This work therefore provides a first answer on performance at the network level.
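
A toy sketch of a virtual-force position update of the kind VBCA builds on: nodes repel when closer than a target spacing d0 and attract beyond it. The constants and the spring-like force law are illustrative.

```python
import numpy as np

def virtual_force_step(pos, d0=50.0, k=0.1):
    """One virtual-force update: repulsion below d0, attraction above it."""
    new = pos.copy()
    for i in range(len(pos)):
        f = np.zeros(3)
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            f += k * (dist - d0) * d / dist       # spring-like virtual force
        new[i] += f
    return new

pos = np.random.rand(10, 3) * 100                 # 10 drones in 3-D (illustrative)
pos = virtual_force_step(pos)                     # iterate until positions settle
```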

KEYWORDS

VBCA, FANET, Communication, Topology, Positioning, Clustering


DEEC Protocol with ACO Based Cluster Head Selection in Wireless Sensor Network

Ramesh Kumar Kait and Renu Jangra, Kurukshetra University, Kurukshetra, India

ABSTRACT

Routing protocols in wireless sensor networks play a great role in network performance, affecting energy organization, network lifetime, and so on. Routing protocols are developed based on different schemes, such as clustering, chaining, and cost-based approaches. A wireless sensor network consists of a large number of nodes, which can be difficult to handle, so the best approach is to combine nodes into clusters. This technique, clustering, puts a limit on the energy used by the sensor nodes; communication and management of the nodes in a cluster are handled by the cluster head. The existing DEEC protocol works efficiently during communication and persists in the network for a long time. In this paper, however, the cluster head is selected from among the cluster nodes by a probability rule based on ACO. The cluster nodes send data to the cluster head, which forwards the relevant information to the base station. ACO-DEEC (Ant Colony Optimization based Distributed Energy-Efficient Clustering protocol) computes the probability rule for selecting the cluster head from two parameters: the distance and the power of the nodes. The algorithm therefore improves energy usage, the number of packets received at the base station, and the number of dead nodes compared with the existing DEEC protocol.
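
A sketch of an ACO-style probability rule over the two stated parameters, residual energy and distance to the base station; the exact functional form used by ACO-DEEC is not given in the abstract, so the desirability term below is an assumption.

```python
import numpy as np

def ch_probability(energy, dist_to_bs, pheromone, alpha=1.0, beta=2.0):
    """ACO-style cluster-head selection probabilities (form illustrative):
    desirability eta grows with residual energy, shrinks with distance."""
    eta = energy / dist_to_bs
    score = (pheromone ** alpha) * (eta ** beta)
    return score / score.sum()

p = ch_probability(energy=np.array([0.9, 0.5, 0.7]),
                   dist_to_bs=np.array([40.0, 25.0, 60.0]),
                   pheromone=np.ones(3))
print(p)   # the node with the best energy/distance trade-off is most likely CH
```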

KEYWORDS

Ant Colony optimization, Wireless sensor Network, DEEC Protocol, ACO-DEEC, Cluster Head


Linking Social Media Posts to News with Siamese Transformers

Jacob Danovitch, Institute for Data Science, Carleton University, Ottawa, Canada

ABSTRACT

Many computational social science projects examine online discourse surrounding a specific trending topic. These works often involve the acquisition of large-scale corpora relevant to the event in question in order to analyze aspects of the response to it. Keyword searches present a precision-recall trade-off, and crowd-sourced annotations, while effective, are costly. This work aims to enable automatic and accurate ad-hoc retrieval of comments discussing a trending topic from a large corpus, using only a handful of seed news articles.
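
A hedged sketch of seed-based retrieval using a generic sentence encoder from the sentence-transformers library as a stand-in for the paper's Siamese transformer; the model name and threshold are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in encoder (assumption)
seed_articles = ["Full text of a seed news article about the event ..."]
comments = ["totally unrelated post",
            "discussion of the same breaking event ..."]

a = model.encode(seed_articles, convert_to_tensor=True)
c = model.encode(comments, convert_to_tensor=True)
scores = util.cos_sim(c, a).max(dim=1).values     # best match against any seed
relevant = [t for t, s in zip(comments, scores) if s > 0.5]  # threshold assumed
```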

KEYWORDS

Political events, Specific-Correspondance LDA, Sporting matches


Topic Detection from Conversational Dialogue Corpus with Parallel Latent Dirichlet Allocation Model and Elbow Method

Haider Khalid1 and Vincent Wade2, 1School of Computer Science and Statistics, Trinity College Dublin, University of Dublin, Dublin, Ireland and 2ADAPT Centre, Trinity College Dublin, University of Dublin, Dublin, Ireland

ABSTRACT

A conversational system needs to know how to switch between topics to sustain a conversation over an extended period. Detecting topics from a dialogue corpus has thus become an important task, and accurate prediction of conversation topics is important for creating coherent and engaging dialogue systems. This paper addresses topic detection with the Parallel Latent Dirichlet Allocation (PLDA) model by clustering a vocabulary of known similar words based on TF-IDF scores and a bag-of-words (BOW) approach. In the experiments, we use K-means clustering with the Elbow Method, for interpretation and validation of consistency within cluster analysis, to select the optimal number of clusters. We evaluate our approach by comparing it with the traditional LDA and clustering technique. The experimental results show that combining PLDA with the Elbow Method selects the optimal number of clusters and refines the topics for the conversation.
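
A compact sketch of the K-means/Elbow step on TF-IDF vectors: compute the within-cluster SSE (inertia) for a range of k and pick the knee. The utterances below are toy data.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = ["let's talk about the match", "the game last night was great",
              "I need a new phone", "which laptop should I buy"]
X = TfidfVectorizer().fit_transform(utterances)

# Elbow method: inertia vs. k; the "knee" of this curve picks the cluster count.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 4)]
print(inertias)
```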

KEYWORDS

Conversational dialogue, latent Dirichlet allocation, topic detection, topic modelling, text-classification


A Most Discriminative and Highly Informative Feature Selection Method on Multilingual Text Data

Suman Dowlagar and Radhika Mamidi, Language Technologies and Research Center, KCIS, International Institute of Information Technology, Gachibowli, Hyderabad, India

ABSTRACT

The presence of irrelevant features in data leads to high dimensionality and complexity in machine learning models. Feature selection addresses high dimensionality by discarding irrelevant features from the feature space, thus reducing model complexity and enhancing accuracy. In this paper, we define the most discriminative and highly informative (MDHI) feature selection method. Discriminative information between features is computed using clustering and similarity metrics, while the information gain between the class label and a feature helps select highly informative features. The MDHI method is evaluated on several datasets and compared with various state-of-the-art feature selection methods. The results show that the method performs better on classification tasks.
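
A rough sketch of the two criteria: mutual information with the label for informativeness, and a pairwise-similarity filter (plain correlation standing in for the paper's clustering/similarity metrics) for discriminativeness; the data and the 0.9 cut-off are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

X = np.random.rand(200, 30)               # toy feature matrix
y = np.random.randint(0, 3, 200)          # toy labels

# "Highly informative": information gain between each feature and the label.
info = mutual_info_classif(X, y, random_state=0)

# "Most discriminative" (sketch): of any two near-duplicate features,
# keep only the more informative one.
corr = np.corrcoef(X, rowvar=False)
keep = set(range(X.shape[1]))
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        if abs(corr[i, j]) > 0.9 and i in keep and j in keep:
            keep.discard(i if info[i] < info[j] else j)
selected = sorted(keep)
```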

KEYWORDS

feature selection, high dimensionality, dimensionality reduction, clustering, similarity, information gain


Hate Speech Detection of Arabic Short Text

Abdullah Aref1, Rana Husni Al Mahmoud2, Khaled Taha3 and Mahmoud Al-Sharif4, 1Computer Science Department, Princess Sumaya University for Technology, Amman, Jordan, 2Computer Science Department, University of Jordan, Amman, Jordan, 3Social Media Lab, Trafalgar AI, Amman, Jordan and 4Social Media Lab, Trafalgar AI, Amman, Jordan

ABSTRACT

The aim of sentiment analysis is to automatically extract opinions from a given text and decide its sentiment. In this paper, we introduce the first publicly available Twitter dataset on Sunnah and Shia (SSTD), as part of religious hate speech, a subproblem of general hate speech. We further provide a detailed review of the data collection process and our annotation guidelines, so that reliable dataset annotation is guaranteed. We employed many stand-alone classification algorithms on the Twitter hate speech dataset, including Random Forest, Complement NB, Decision Tree, and SVM, and two deep learning methods, CNN and RNN. We further study the influence of the FastText and word2vec word embeddings and their dimensions. In all our experiments, the classification algorithms are trained using a random split of the data (66% for training and 34% for testing), with the two splits drawn by stratified sampling of the original dataset. CNN-FastText achieves the highest F-measure (52.0%), followed by CNN-word2vec (49.0%), showing that neural models with FastText word embeddings outperform classical feature-based models.
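
A minimal sketch of the evaluation protocol on placeholder data: a 66/34 stratified split, a TF-IDF + linear SVM baseline, and macro F-measure.

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

tweets = ["placeholder hateful example", "placeholder neutral example"] * 50
labels = [1, 0] * 50

# 66/34 stratified split, mirroring the paper's protocol.
Xtr, Xte, ytr, yte = train_test_split(tweets, labels, test_size=0.34,
                                      stratify=labels, random_state=0)
vec = TfidfVectorizer()
clf = LinearSVC().fit(vec.fit_transform(Xtr), ytr)
print(f1_score(yte, clf.predict(vec.transform(Xte)), average="macro"))
```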

KEYWORDS

Hate Speech, Dataset, Text classification, Sentiment analysis


Quantum Criticism: A Tagged News Corpus Analysed for Sentiment and Named Entities

Ashwini Badgujar, Sheng Cheng, Andrew Wang, Kai Yu, Paul Intrevado and David Guy Brizan, University of San Francisco, San Francisco, CA, USA

ABSTRACT

In this project, we continuously collect data from the RSS feeds of traditional news sources. We apply several pre-trained implementations of named entity recognition (NER) tools, quantifying the success of each implementation. We also perform sentiment analysis of each news article at the document, paragraph, and sentence level, with the goal of creating a corpus of tagged news articles that is made available to the public through a web interface. Finally, we show how the data in this corpus can be used to identify bias in news reporting.
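
A small sketch of the tagging pass, using spaCy as one off-the-shelf NER implementation and VADER as a stand-in sentence/document sentiment scorer (the project's actual tool set is broader).

```python
import spacy
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

nlp = spacy.load("en_core_web_sm")          # one pre-trained NER implementation
sia = SentimentIntensityAnalyzer()          # stand-in sentiment scorer

article = "Acme Corp. announced layoffs in Paris. Investors were not pleased."
doc = nlp(article)
entities = [(e.text, e.label_) for e in doc.ents]

# Sentiment at document and sentence level (paragraphs would split on blank lines).
doc_score = sia.polarity_scores(article)["compound"]
sent_scores = [sia.polarity_scores(s.text)["compound"] for s in doc.sents]
print(entities, doc_score, sent_scores)
```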

KEYWORDS

Content Analysis, Named Entity Recognition, Sentiment Analysis


VSMbM: A New Metric for Automatically Generated Text Summaries Evaluation

Alaidine Ben Ayed, Ismaïl Biskri and Jean-Guy Meunier, Université du Québec à Montréal (UQAM), Canada

ABSTRACT

In this paper, we present VSMbM, a new metric for the evaluation of automatically generated text summaries. VSMbM is based on vector space modelling and gives insight into the extent to which retention and fidelity are met in the generated summaries. Two variants of the proposed metric, PCA-VSMbM and ISOMAP-VSMbM, are tested and compared to Recall-Oriented Understudy for Gisting Evaluation (ROUGE), a standard metric used to evaluate automatically generated summaries. Experiments conducted on the Timeline17 dataset show that VSMbM scores are highly correlated with the state-of-the-art ROUGE scores.
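
The vector-space intuition reduces to a short example: embed source and summary in the same term space and measure their cosine similarity as a retention proxy; the PCA/Isomap variants would first reduce this space. The texts below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = "Full text of the source document ..."      # placeholder
summary = "An automatically generated summary ..."   # placeholder

X = TfidfVectorizer().fit_transform([source, summary])
retention = cosine_similarity(X[0], X[1])[0, 0]       # higher = more content retained
print(f"retention proxy: {retention:.3f}")
```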

KEYWORDS

Automatic Text Summarization, Automatic summary evaluation, Vector space modelling.


Use of an IoT Technology to Analyse the Inertial Measurement of Smart Ping-pong Paddle

Rajeev Kanth1, Tuomas Korpi1 and Jukka Heikkonen2, 1School of Engineering and Technology, Savonia University of Applied Sciences, Opistotie 2, 70101 Kuopio, Finland and 2Department of Future Technologies, University of Turku, Vesilinnantie 5, 20500 Turku, Finland

ABSTRACT

In this article, a Smart Ping Pong Paddle is introduced as an example of the use of sensor technology in sports. We use an accelerometer and a gyroscope sensor for analysis, gathering motion data from the game object to build a real-time 3D replica of the paddle and obtain its actual orientation. We examine the technical details and principles of obtaining digital motion-processing data from the sensor to the microcontroller and transferring it wirelessly to 3D modelling software. These details are applied in practice, and a working demo of the Smart Ping Pong paddle is built. A few other applications in the realm of object-orientation sensing are also reviewed.
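
A common way to fuse the two sensors for orientation is a complementary filter; the sketch below tracks pitch from the gyro rate and the accelerometer's gravity angle (the sample values and the 0.98 blend factor are illustrative, not the article's firmware).

```python
import numpy as np

def complementary_filter(pitch, gyro_rate, accel_pitch, dt=0.01, a=0.98):
    """Blend integrated gyro rate with the accelerometer's gravity-based angle."""
    return a * (pitch + gyro_rate * dt) + (1 - a) * accel_pitch

# Toy samples: (gyro pitch rate in deg/s, accelerometer (ax, ay, az) in m/s^2).
samples = [(1.5, (0.1, 0.0, 9.8)), (2.0, (0.3, 0.0, 9.8)), (1.0, (0.2, 0.1, 9.8))]
pitch = 0.0
for g, (ax, ay, az) in samples:
    accel_pitch = np.degrees(np.arctan2(ax, np.hypot(ay, az)))
    pitch = complementary_filter(pitch, g, accel_pitch)
print(f"estimated pitch: {pitch:.2f} deg")
```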

KEYWORDS

Accelerometer Sensor, Gyroscope Sensor, Ping Pong Paddle, and Motion Analysis


ParaCom: An IoT based affordable solution enabling people with limited mobility to interact with machines

Siddharth Sekar, Nirmit Agarwal and Vedant Bapodra, Department of Computer Engineering, Mukesh Patel School of Technology Management and Engineering, Mumbai, India

ABSTRACT

This paper proposes a solution to improve the lives of patients who are paralyzed and/or suffering from motor neuron diseases (MND) such as amyotrophic lateral sclerosis (ALS) and primary lateral sclerosis, by making them more independent. Patients suffering from these diseases cannot move their arms and legs, and they also lose their body balance and the ability to speak. We propose an IoT-based communication controller built on Morse code: the paper proposes integrating an IoT device with a smartphone, which can then be controlled through the IoT device using Morse code.
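
A tiny sketch of the Morse decoding idea: classify button-press durations into dots and dashes with an assumed threshold and look the pattern up in a (here abbreviated) Morse table.

```python
MORSE = {".-": "A", "-...": "B", "...": "S", "---": "O", ".--.": "P"}  # excerpt only

def classify_press(duration_ms, dot_max=250):
    """Presses shorter than dot_max count as dots, longer as dashes (threshold assumed)."""
    return "." if duration_ms < dot_max else "-"

presses = [130, 120, 110]                  # toy button-press durations from the device
symbol = "".join(classify_press(d) for d in presses)
print(MORSE.get(symbol, "?"))              # "..." -> "S"
```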

KEYWORDS

Internet of Things (IoT), Motor Neuron Disease (MND), Amyotrophic Lateral Sclerosis (ALS), Arduino


Dynamic Cipher for Enhanced Cryptography and Communication for Internet of Things

Prabjot Kaur, Mumbai, Maharashtra, India

ABSTRACT

Security is a vital element for enabling the widespread adoption of Internet of Things technologies and applications. Without guarantees of system-level confidentiality, authenticity, and privacy, the relevant stakeholders are unlikely to adopt Internet of Things solutions on a large scale. In early-stage Internet of Things deployments (e.g., based on RFIDs only), security solutions were mostly devised in an ad hoc fashion; such deployments were usually vertically integrated, with all elements under the management of a single administrative entity. In this work, we propose a new dynamic cipher that allows a single controller to access more than one device simultaneously in a network, by making use of the Dynamic Variable Cipher security certificate protocol. The protocol uses key matrices: the same key matrix is stored at all the communicating nodes, so when plain text is encrypted to cipher text at the sending side, the sender transmits the cipher text without the key that is to be used to decode the message.
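
As a hedged illustration of a pre-shared key-matrix scheme (a Hill-style cipher, not necessarily the protocol's exact construction): both nodes hold the same matrix, so only ciphertext ever crosses the network.

```python
import numpy as np

# Both nodes pre-share the same key matrix (values illustrative; det = 9 is coprime
# with 26, so the matrix is invertible mod 26 and decryption is possible).
K = np.array([[3, 3],
              [2, 5]])

def encrypt(msg, key):
    """Encrypt letter pairs as row vectors: cipher = (block @ key) mod 26."""
    nums = np.array([ord(c) - 65 for c in msg.upper()]).reshape(-1, 2)
    return (nums @ key % 26).ravel()

print(encrypt("HIDE", K))  # only the ciphertext is transmitted; K never leaves the nodes
```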

KEYWORDS

Internet of things, Security, Encryption, Dynamic Cipher


Reach Us

icdipv@itcse2020.org
icdipconf@yahoo.com