Accepted Papers
Contextual Factors with an Impact on the Design and Management of Health Information Systems’ Interoperability

Grace Kobusinge1,2, 1Gothenburg University, Gothenburg, Sweden and 2Makerere University, P.O. Box 7062 Kampala, Uganda

ABSTRACT

Owing to their great information processing and dissemination power, health information systems (HIS) can readily make past patient medical information available across the continuum of care in order to facilitate ongoing treatment. However, a number of existing HIS are designed as vertical silos with no provision for interoperability and therefore cannot exchange patient information. At the same time, little is known about the intricacies and factors that surround HIS interoperability implementations. This study therefore employs an institutional lens to investigate contextual factors with an impact on the design of HIS interoperability. Through this perspective, the following seven contextual factors were identified: institutional autonomism, intended system goals, existing health information systems, national HIS implementation guidelines, interoperability standards, policy, and resources. A further implication of the study is the use of an institutional lens to make sense of the systems' integration context in order to discover salient factors that might affect the design of HIS interoperability.

KEYWORDS

Health Information Systems Interoperability, Contextual Factors, Health Information Systems Design


Readiness Analysis for Enterprise Information System Implementation: The Case Study of Nigerian Manufacturing Companies

Nwanneka Eya1 and Rufai Ahmad2, 1Department of Computer and Information Science, University of Strathclyde, Glasgow, Scotland and 2University of Strathclyde, G1 1XH, Glasgow, Scotland, United Kingdom

ABSTRACT

Enterprise information systems play an important role in manufacturing companies by integrating a firm's information, operating procedures, and functions across all departments, resulting in better operations in the global business environment. In developing countries like Nigeria, most manufacturing firms face the need to compete efficiently in global markets. This is because of Nigeria's dynamic, continually evolving business environment and the enormous government support for indigenous manufacturers. The need for an enterprise information system therefore cannot be overemphasized, but because an enterprise information system is a major investment that is expensive and time consuming, assessing whether a company is ready for such a major transition becomes very important. In assessing the readiness of Nigerian manufacturing companies for ERP implementation, there are many factors to consider. This study assesses the readiness level of Nigerian manufacturing companies based on survey responses from a wide spectrum of Nigerian manufacturing firms. The findings showed that readiness level is mainly influenced by technological, organizational, and environmental factors, which involve assumed benefits, assumed difficulty, technological architecture, technological skills, competitive pressure, organization size, and information management priority. Technological factors were observed to have the greatest impact in determining the readiness level of a firm. This paper suggests a structure or standard that Nigerian manufacturers could use to ascertain their company's readiness level before investing in an enterprise information system.
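
As a purely illustrative sketch of the kind of readiness "structure or standard" the paper calls for, the snippet below aggregates Likert-scale survey responses on the seven factors named above into a single weighted score. The weights, the inversion of the difficulty factor, and the example responses are hypothetical, not values from the study.

```python
# Illustrative sketch only: a weighted readiness score over the TOE factors
# named in the abstract. Weights and survey scores are hypothetical.

FACTOR_WEIGHTS = {
    "assumed_benefits": 0.20,          # technological
    "assumed_difficulty": 0.15,        # technological (high difficulty lowers readiness)
    "technological_architecture": 0.20,
    "technological_skills": 0.15,
    "competitive_pressure": 0.10,      # environmental
    "organization_size": 0.10,         # organizational
    "information_management_priority": 0.10,
}

def readiness_score(survey: dict) -> float:
    """Combine 1-5 Likert survey responses into a 0-1 readiness score."""
    score = 0.0
    for factor, weight in FACTOR_WEIGHTS.items():
        value = survey[factor]                 # 1 (low) .. 5 (high)
        if factor == "assumed_difficulty":     # higher difficulty -> lower readiness
            value = 6 - value
        score += weight * (value - 1) / 4.0    # normalize each factor to 0..1
    return score

example = {f: 4 for f in FACTOR_WEIGHTS} | {"assumed_difficulty": 2}
print(f"readiness: {readiness_score(example):.2f}")
```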

KEYWORDS

Enterprise information system, Readiness analysis, Nigeria, Manufacturing, Company.


Transcript Level Analysis Improves the Understanding of Bladder Cancer

Xiang Ao and Shuaicheng Li, Department of Computer Science, City University of Hong Kong, Hong Kong

ABSTRACT

Bladder cancer (BC) is one of the most prevalent diseases globally, attracting various studies on BC-relevant topics. High-throughput sequencing makes it convenient to extensively explore genetic changes, such as variation in gene expression, in the development of BC. In this study, we performed differential gene and transcript expression (DGE and DTE) analysis and differential transcript usage (DTU) analysis on an RNA-seq dataset of 42 bladder cancer patients. DGE analysis reported 8543 significantly differentially expressed (DE) genes. In contrast, DTE analysis detected 14350 significantly DE transcripts from 8371 genes, and DTU analysis detected 27914 significantly differentially used (DU) transcripts from 8072 genes. Analysis of the top 5 DE genes demonstrated that DTE and DTU analysis locate the source of changes in gene expression at the transcript level. The transcript-level analysis also identified DE and DU transcripts from previously reported mutated genes related to BC, such as ERBB2, ESPL1, and STAG2, suggesting an intrinsic connection between gene mutation and alternative splicing. Hence, transcript-level analysis may help disclose the underlying pathological mechanism of BC and further guide the design of personalized treatment.
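
To make the DTE/DTU distinction concrete, here is a minimal toy sketch using numpy and scipy: the same transcript counts are tested once on raw (log) expression (DTE) and once on within-gene usage proportions (DTU), which can differ even when gene-level totals do not. The data are synthetic; the study itself would use dedicated RNA-seq tools rather than plain t-tests.

```python
# Toy DTE vs. DTU comparison on synthetic counts (not the 42-patient dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# counts[transcript, sample]: 3 transcripts of one gene, 5 tumour + 5 normal samples
tumour = rng.poisson(lam=[[200], [50], [50]], size=(3, 5)).astype(float)
normal = rng.poisson(lam=[[100], [100], [100]], size=(3, 5)).astype(float)

# DTE: test each transcript's (log) expression directly between conditions.
for t in range(3):
    p = stats.ttest_ind(np.log1p(tumour[t]), np.log1p(normal[t])).pvalue
    print(f"transcript {t}: DTE p = {p:.3g}")

# DTU: test each transcript's *share* of its gene's total expression, which
# can change even when the gene-level total does not.
usage_tumour = tumour / tumour.sum(axis=0, keepdims=True)
usage_normal = normal / normal.sum(axis=0, keepdims=True)
for t in range(3):
    p = stats.ttest_ind(usage_tumour[t], usage_normal[t]).pvalue
    print(f"transcript {t}: DTU p = {p:.3g}")
```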

KEYWORDS

Bladder Cancer, Differential Gene Expression, Differential Transcript Expression, Differential Transcript Usage


Automatic Identification and Measurement of Antibiogram Analysis

Merve Duman, Roya Choupani and Faris Serdar Tasel, Department of Computer Engineering, Cankaya University, Ankara, Turkey

ABSTRACT

In this study, an automatic identification method for antibiogram analysis is implemented, existing methods are investigated, and results are discussed. In an antibiogram analysis, inhibition zones around drug disks read by humans may be measured with mistakes. Mistakes such as misreading during the analysis process, or conditions like imperfect or partial seeding of inhibition zones, can be addressed with automatic identification methods. Periodic reading or a tracking system is also needed, because inhibition zones change over time. To overcome these problems, several enhancements are applied to the image. As pre-processing operations, Otsu thresholding, largest-object finding, binary image masking, and morphological erosion and closing are applied. The circular Hough transform is used to find the drug disks, and profile lines are drawn to find the inhibition zones. Otsu thresholding is used to determine the zone borders. The results obtained from the algorithm are evaluated and discussed.
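
A compact sketch of this pipeline, assuming OpenCV and a plate photograph at the hypothetical path "plate.jpg": Otsu thresholding plus morphology for pre-processing, the circular Hough transform for the drug disks, and radial profile lines to locate each inhibition-zone border. All parameter values are illustrative guesses, not the paper's settings.

```python
import cv2
import numpy as np

gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu threshold + morphological clean-up, as in the pre-processing step.
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
binary = cv2.erode(binary, kernel)

# Circular Hough transform to find the antibiotic disks.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=30)

def zone_radius(cx, cy, r_disk, n_rays=36, r_max=120):
    """Walk radial profile lines outward from a disk centre and return the
    mean radius at which the thresholded image flips at the zone border."""
    radii = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        for r in range(r_disk + 2, r_max):
            x = int(cx + r * np.cos(theta))
            y = int(cy + r * np.sin(theta))
            if 0 <= x < binary.shape[1] and 0 <= y < binary.shape[0]:
                if binary[y, x] == 0:   # assumed polarity: zone is bright
                    radii.append(r)
                    break
    return np.mean(radii) if radii else float("nan")

if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        print(f"disk at ({cx},{cy}): zone radius ~ {zone_radius(cx, cy, r):.1f}px")
```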

KEYWORDS

Antibiogram Analysis, Image Processing, Feature Extraction, Object Detection, Image Segmentation


Automatic Diagnosis of Acne and Rosacea using Deep Learning

Firas Gerges and Frank Y. Shih, Department of Computer Science, New Jersey Institute of Technology, Newark, New Jersey, USA

ABSTRACT

Acne and rosacea are two common skin diseases that affect many people in the United States. The two conditions can produce similar symptoms, which leads patients to mistake one for the other. In this paper, we aim to develop a model that can efficiently differentiate between these two skin conditions. A deep learning model is proposed to automatically distinguish rosacea from acne cases using images of infected skin. Image augmentation is used to enlarge the original dataset. Experimental results show that our model achieves very high performance, with an average testing accuracy of 99%.
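
A minimal sketch of such a binary CNN with image augmentation, written against tensorflow.keras. The layer sizes, the augmentation settings, and the directory layout ("data/train/<acne|rosacea>/") are assumptions for illustration, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32, label_mode="binary")

augment = models.Sequential([            # enlarge the effective dataset
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    augment,
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"), layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # acne vs. rosacea
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=20)
```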

KEYWORDS

Acne, Rosacea, Deep Learning, Image Processing, Convolutional Neural Networks


Spectrophotometric Analysis Based Data Fusion for Shape Recovery

Zhi Yang1 and Youhua Yu2, 1School of Humanities, Jinggangshan University, Ji'an, P.R. China and 2Matrix Technology (Beijing Ltd.), Daxing, Beijing, P.R. China

ABSTRACT

Fusing data from optical cameras with data from other physical sensors to form a uniform surface has long been a fascinating idea in the academic world, especially in the computer vision community. Due to practical limits, research on the subject has endured a long and arduous journey, and its prospects would be far more obscure were it not for breakthroughs in shape recovery. Benefiting from that advancement, in this paper we propose an updated version of the Luminance Integration (LI) method. The key achievement of our work is the introduction of Spectrophotometric Analysis (SA), which handles luminance/intensity values in a fully justified reflectance-spectroscopic fashion, resolving confusion introduced by colorimetric models and photoelectric equipment. In addition, a framework of statistical spatial alignment is used for data fusion, in which geometrical and semantic inferences are given. In particular, the alignment process generally starts with a series of spatial transformations, on the assumption that these transformations can diminish unwanted variance components. Moreover, the voxel-based analysis all derives from the same rigid body of an object, so the Magnitude of Relative Error (MRE) caused by region-specific effects of the frame of reference can be reduced to a minimum. Finally, in an extensive series of experiments, we carefully evaluate the parametric models we construct, with the aim of refining the shape alignment with proper deterministic and semantic inferences. Our results show that our method effectively improves the accuracy of shape recovery and the on-line performance of matching corresponding spatial data, especially data from optical cameras and depth/distance sensors.
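
The alignment stage described above begins with spatial transformations between sensor datasets. As one standard building block (an illustration, not the authors' exact procedure), the sketch below computes the rigid rotation and translation that best align two matched 3-D point sets via the Kabsch/Procrustes solution.

```python
import numpy as np

def rigid_align(P, Q):
    """Return R, t minimizing ||R @ P_i + t - Q_i||^2 over matched points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation and translation of a random cloud.
rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```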

KEYWORDS

Spectrophotometric Analysis, Data Fusion, Shape Recovery, Point Cloud, Surface Alignment


Safety Helmet Detection in Industrial Environment Using Deep Learning

Ankit Kamboj and Nilesh Powar, Advanced Analytics Team, Cummins Technologies India Pvt. Ltd, Pune, India

ABSTRACT

Safety is of paramount value for employees working in industrial and construction environments. Real-time object detection is an important technique for detecting violations of safety compliance in an industrial setup. Negligence in wearing safety helmets can be hazardous to workers, so an automatic surveillance system that detects persons not wearing helmets is of utmost importance and reduces the labor-intensive work of monitoring violations. In this paper, we deploy an advanced convolutional neural network (CNN) algorithm, the Single Shot Multibox Detector (SSD), to monitor safety-helmet violations. Various image processing techniques are applied to the video data collected from the industrial plant. A practical and novel safety detection framework is proposed in which the CNN first detects persons in the video data and then, in a second step, detects whether each person is wearing a safety helmet. Using the proposed model, deep learning inference benchmarking is performed on a Dell Advanced Tower workstation. A comparative study of the proposed approach is analysed in terms of detection accuracy (average precision), which illustrates the effectiveness of the proposed framework.
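
A hedged sketch of the two-stage framework, using torchvision's off-the-shelf SSD for the person stage; `helmet_classifier` is a placeholder for the paper's second-stage CNN, and the head-crop heuristic is an assumption.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT").eval()
PERSON = 1  # COCO label id for "person"

def detect_violations(frame, helmet_classifier, score_thresh=0.5):
    """Return person boxes whose head region the classifier flags as bare.

    `frame` is a PIL image; `helmet_classifier` is a stand-in callable for
    the paper's second-stage helmet / no-helmet CNN.
    """
    with torch.no_grad():
        out = detector([to_tensor(frame)])[0]
    violations = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() != PERSON or score.item() < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        head = frame.crop((x1, y1, x2, y1 + (y2 - y1) // 4))  # top quarter
        if not helmet_classifier(head):       # second-stage CNN (placeholder)
            violations.append((x1, y1, x2, y2))
    return violations
```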

KEYWORDS

Safety Helmet Detection, Deep Learning, SSD, CNN, Image Processing


Cobb Angle Measurement from Radiograph Images Using CADx

Hritam Basak and Sreejeet Maity, Electrical Engineering Department, Jadavpur University, Kolkata, India

ABSTRACT

In this paper we propose an automated method for Cobb angle computation from radiograph (X-ray) images of scoliosis patients, with the objective of increasing the reliability of spinal curvature quantification. The automatic technique comprises four steps: pre-processing (denoising and filtering), region of interest (ROI) identification, feature extraction, and Cobb angle computation from the extracted spine centre-line. An SVM (support vector machine) classifier is used for object identification and feature extraction. The spine is assumed to be a continuous structure rather than a series of discrete vertebral bodies with individual orientations. Several methods are used to identify the centre-line of the spine: morphological operations, Gaussian blurring, and polynomial fitting. Tangents are then taken at every point of the extracted centre-line, and the Cobb angle is evaluated from this set of tangents. The automated diagnosis process was evaluated on 25 coronal X-ray images. ROI identification based on the SVM classifier is effective, with a specificity of 100%, and approximately 58% of the centre-lines extracted from these ROIs were accurate, with very low or negligible angular variability. Because of low radiation dose and several other factors, the endplates and edges of the spine in radiograph images were blurred; hence the continuous-contour-based approach gives better reliability.
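
The final step lends itself to a short worked example: fit a polynomial to the centre-line, differentiate it to get the tangent at every point, and take the Cobb angle as the largest angle between tangents. The centre-line points below are synthetic stand-ins for the output of the SVM/ROI pipeline.

```python
import numpy as np

# Synthetic centre-line points (y = vertical position, x = lateral deviation).
y = np.linspace(0, 400, 80)
x = 12 * np.sin(y / 120.0)

coeffs = np.polyfit(y, x, deg=5)            # polynomial fit of the centre-line
slopes = np.polyval(np.polyder(coeffs), y)  # dx/dy = tangent slope at each point

# Cobb angle: maximum difference between tangent inclinations along the curve.
angles = np.degrees(np.arctan(slopes))
cobb = angles.max() - angles.min()
print(f"Cobb angle ~ {cobb:.1f} degrees")
```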

KEYWORDS

Scoliosis, Cobb Angle, CADx


Rate Control Based on Similarity Analysis in Multi-view Video Coding

Tao Yan1, In-Ho Ra2, Shaojie Hou1, Jingyu Feng1, and Zhicheng Wu2, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The Joint Video Team proposed the JMVM reference model for multi-view video coding, but the model does not provide an effective rate control scheme. This paper proposes a rate control algorithm for multi-view video coding (MVC) based on correlation analysis. The core of the algorithm is to first divide all images into six types of coded frames according to the structural relationship between disparity prediction and motion prediction, improving the quadratic rate-distortion model, and then to perform analysis between different views based on similarity. Rate control is divided into a four-layer structure for multi-view video coding. Among these, frame-layer rate control considers hierarchical B frames and other factors when allocating bits, and basic-unit-layer rate control uses different quantization parameters according to the content complexity of each macroblock. The average error between the actual bit rate and the target bit rate of the proposed algorithm is kept within 0.94%.
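
The four-layer idea can be illustrated with a small sketch (not the authors' code): a GOP budget is split across views by similarity-derived weights, then across frames by frame type, then across basic units by complexity. All weights and budgets below are made-up numbers.

```python
def allocate(gop_bits, view_weights, frame_weights, unit_complexities):
    """Follow one branch of the hierarchy: GOP -> view0 -> anchor frame -> units."""
    view_bits = {v: gop_bits * w / sum(view_weights.values())
                 for v, w in view_weights.items()}
    # Frame layer: anchor frames get larger shares than hierarchical B frames.
    frame_bits = {f: view_bits["view0"] * w / sum(frame_weights.values())
                  for f, w in frame_weights.items()}
    # Basic-unit layer: split one frame's budget by relative complexity (MAD).
    total_c = sum(unit_complexities)
    return [frame_bits["anchor"] * c / total_c for c in unit_complexities]

bits = allocate(gop_bits=1_200_000,
                view_weights={"view0": 1.0, "view1": 0.8, "view2": 0.8},
                frame_weights={"anchor": 3.0, "B1": 2.0, "B2": 1.0},
                unit_complexities=[4.2, 9.1, 6.3, 2.4])
print([round(b) for b in bits])   # bits per basic unit of one anchor frame
```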

KEYWORDS

Multi-view video coding, Rate control, Bit allocation, Similarity analysis, Basic unit layer


A Hybrid Artificial Bee Colony Strategy for T-way Test Set Generation with Constraints Support

Ammar K Alazzawi1*, Helmi Md Rais1, Shuib Basri1, Yazan A. Alsariera4, Abdullateef Oluwagbemiga Balogun1,2, Abdullahi Abubakar Imam1,3, 1Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Bandar Seri Iskandar 32610, Perak, Malaysia, 2Department of Computer Science, University of Ilorin, PMB 1515, Ilorin, Nigeria, 3Department of Computer Science, Ahmadu Bello University, Zaria, Nigeria and 4Department of Computer Science, Northern Border University, Arar 73222, Saudi Arabia

ABSTRACT

t-way interaction testing is a systematic approach to test set generation. It is a vital test planning method in software testing that generates test sets based on the interactions among parameters, covering every possible t-way combination of parameter values; the strength t specifies the degree of interaction among the parameters. However, some combinations should be excluded when generating the final test set because they produce invalid outputs or are impossible or unwanted (e.g., excluded by the system requirements). These combinations are known as constrained or forbidden combinations. Several t-way strategies have been proposed in existing studies to address the test set generation problem; however, generating the optimal test set remains open research, as the problem is NP-hard. This study therefore proposes a novel hybrid artificial bee colony (HABC) t-way test set generation strategy with constraints support. The proposed approach hybridizes the artificial bee colony (ABC) algorithm with particle swarm optimization (PSO): PSO is integrated as the exploratory agent for ABC, and the information-sharing ability of PSO via the weight factor is used to enhance ABC's performance. The output of the hybrid ABC is a set of promising, near-optimal test combinations. Experimental results showed that HABC outperformed existing methods (HSS, LAHC, SA_SAT, PICT, TestCover, mATEG_SAT) and yielded better test sets.
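
The search problem HABC tackles can be sketched compactly: the fitness of a candidate test is the number of still-uncovered t-way tuples it covers, with forbidden combinations rejected outright. In the toy below, uniform random candidates stand in for the ABC/PSO population; the actual hybrid search loop is elided.

```python
import random
from itertools import combinations, product

def all_tuples(domains, t):
    """Every t-way interaction tuple (columns, values) over the domains."""
    return {(cols, values)
            for cols in combinations(range(len(domains)), t)
            for values in product(*(domains[c] for c in cols))}

def violates(test, forbidden):
    """True if the full test contains any forbidden (column, value) combination."""
    return any(all(test[c] == v for c, v in combo) for combo in forbidden)

def tuple_forbidden(cols, values, forbidden):
    """True if a t-way tuple itself contains a forbidden combination."""
    assignment = dict(zip(cols, values))
    return any(all(assignment.get(c) == v for c, v in combo) for combo in forbidden)

def fitness(test, uncovered, t, forbidden):
    """Tuples newly covered by `test`; -1 if a constraint is violated."""
    if violates(test, forbidden):
        return -1
    covered = {(cols, tuple(test[c] for c in cols))
               for cols in combinations(range(len(test)), t)}
    return len(covered & uncovered)

# Three binary parameters, pairwise coverage (t=2), one forbidden combination:
# parameter 0 = 0 must never occur together with parameter 1 = 1.
domains = [[0, 1], [0, 1], [0, 1]]
forbidden = [((0, 0), (1, 1))]
random.seed(0)

uncovered = {(c, v) for c, v in all_tuples(domains, 2)
             if not tuple_forbidden(c, v, forbidden)}
suite = []
while uncovered:
    candidates = [[random.choice(d) for d in domains] for _ in range(50)]
    best = max(candidates, key=lambda c: fitness(c, uncovered, 2, forbidden))
    if fitness(best, uncovered, 2, forbidden) > 0:
        suite.append(best)
        uncovered -= {(cols, tuple(best[c] for c in cols))
                      for cols in combinations(range(len(best)), 2)}
print(suite)   # a small constraint-respecting pairwise test suite
```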

KEYWORDS

Software testing, t-way testing, hybrid artificial bee colony, meta-heuristics, optimization problem.


A Novel Bit Allocation Algorithm in Multi-view Video

Tao Yan1, In-Ho Ra2, Jingyu Feng1, Linyun Huang1 and Zhicheng Wu1, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The difficulty of rate control for multi-view video coding (MVC) is how to allocate bits between views. Our previous research on bit allocation among viewpoints used correlation analysis among viewpoints to predict the weight of each viewpoint, but when the scene changes this prediction method produces large errors. This article therefore avoids that situation through scene detection. The core of the algorithm is to first divide all images into six types of encoded frames according to the structural relationship between disparity prediction and motion prediction, improving the quadratic rate-distortion model, and then to perform inter-view, frame-layer, and basic-unit-layer bit allocation and rate control based on the encoded information. A reasonable bit rate is allocated between viewpoints based on the encoded information, and the frame-layer bit rate is allocated using frame complexity and time-domain activity. Experimental simulation results show that the algorithm can effectively control the bit rate of MVC while maintaining efficient coding, compared with the current JVT MVC scheme with fixed quantization parameters.
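
The scene-detection guard can be illustrated with a common, simple detector (not necessarily the paper's): threshold the histogram difference between consecutive frames, and re-estimate the inter-view weights whenever it fires. This sketch assumes OpenCV and grayscale frames.

```python
import cv2
import numpy as np

def is_scene_change(prev_frame, frame, thresh=0.5):
    """Flag a scene change when the normalized histogram difference is large."""
    h1 = cv2.calcHist([prev_frame], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([frame], [0], None, [64], [0, 256])
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()
    return 0.5 * np.abs(h1 - h2).sum() > thresh  # total variation distance

# On a scene change, the previously predicted view weights are discarded
# and re-estimated instead of being carried forward.
```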

KEYWORDS

Multi-view video coding, Rate control, Bit allocation, Rate distortion model, Basic unit layer


A Semi-supervised Learning Approach to Forecast CPU Usages Under Peak Load in an Enterprise Environment

Nitin Khosla1 and Dharmendra Sharma2, 1Assistant Director – Performance Engineering, Dept. of Home Affairs, Australia and 2Professor – Computer Science, University of Canberra, Australia

ABSTRACT

The aim of the semi-supervised neural net learning approach in this paper is to apply and improve supervised classifiers and to develop a model that predicts CPU usage under unpredictable peak load (stress conditions) in a large enterprise application environment with several hundred hosted applications and a large number of concurrent users. The method forecasts the likelihood of extreme CPU use caused by a burst in web traffic from a large number of concurrent users, where many applications run simultaneously in real time in a large enterprise IT system. The model extracts features by analysing the workload patterns of user demand, which are largely hidden in data related to key transactions of core IT applications. It creates synthetic workload profiles by simulating synthetic concurrent users, executes the key scenarios in a test environment, and uses the model to predict excessive CPU utilization under peak load (stress) conditions. We used the Expectation-Maximization method with different dimensionality and regularization settings, extracting and analysing the parameters that improve the likelihood of the model after marginalizing out the unknown labels. Workload demand prediction with semi-supervised learning has tremendous potential in capacity planning to optimize and manage IT infrastructure at lower risk.
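
A toy sketch of the EM idea, assuming scikit-learn: fit a two-component Gaussian mixture to workload features that are mostly unlabeled, treating the unknown peak/normal label as the latent variable EM marginalizes out. The synthetic features below stand in for the key-transaction workload profiles the paper extracts.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_load = rng.normal([200, 0.3], [40, 0.05], size=(500, 2))   # (req/s, cpu)
peak_load = rng.normal([900, 0.9], [100, 0.05], size=(40, 2))
X = np.vstack([normal_load, peak_load])                           # unlabeled mix

gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
peak_comp = int(np.argmax(gm.means_[:, 1]))    # component with higher CPU mean

def p_saturation(workload):
    """Posterior probability that a workload profile belongs to the peak regime."""
    return gm.predict_proba(np.atleast_2d(workload))[:, peak_comp]

print(p_saturation([850, 0.85]))   # high
print(p_saturation([210, 0.32]))   # low
```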

KEYWORDS

Semi-supervised learning, Performance Engineering, Stress testing, Neural Nets, Machine learning applications


Advanced Rate Control Technologies for MVC

Tao Yan1, In-Ho Ra2, Zhicheng Wu1, Shaojie Hou1 and Jingyu Feng1, 1School of Information Engineering, Putian University, Putian, China and 2School of Computer, Information and Communication Engineering, Kunsan National University, Gunsan, South Korea

ABSTRACT

The current multi-view video coding (MVC) reference model of the Joint Video Team (JVT) does not provide an efficient rate control scheme, so this paper proposes a rate control algorithm for multi-view video coding. Since MVC rate control has not been thoroughly studied, and based on an analysis of the shortcomings of the rate-distortion models used in existing video rate control and of the characteristics of multi-view video coding, a multi-view video coding rate control algorithm based on the quadratic rate-distortion (RD) model is presented. The core of the algorithm is to first divide all images into six types of encoded frames according to the structural relationship between disparity prediction and motion prediction, improving the quadratic rate-distortion model, and then to perform inter-view, frame-layer, and basic-unit-layer bit allocation and rate control based on the encoded information. A reasonable bit rate is allocated between viewpoints based on the encoded information, and the frame-layer bit rate is allocated using frame complexity and time-domain activity. Experimental simulation results show that the algorithm can effectively control the bit rate of multi-view video coding while maintaining efficient coding, compared with the current JVT MVC scheme with fixed quantization parameters.
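
The quadratic RD model admits a short worked example: with R(Q) = c1*MAD/Q + c2*MAD/Q^2, the quantization step for a frame's target bit budget is the positive root of a quadratic in 1/Q. The coefficients below are illustrative, not the paper's values.

```python
import math

def solve_q(target_bits, mad, c1=200.0, c2=500.0):
    """Solve c2*mad*x^2 + c1*mad*x - target_bits = 0 for x = 1/Q."""
    a, b, c = c2 * mad, c1 * mad, -float(target_bits)
    x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root
    return 1.0 / x

q = solve_q(target_bits=24_000, mad=3_200)
print(f"quantization step Q ~ {q:.2f}")   # ~29 for these illustrative numbers
```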

KEYWORDS

MVC(multi-view video coding), Rate control, Bit allocation, Human visual characteristics


A Hybrid Machine Learning Model with Cost-Function based Outlier Removal and its Application on Credit Rating

Aurora Y. Mu, Department of Mathematics, Western Connecticut State University, Danbury, Connecticut, United States of America

ABSTRACT

This paper establishes a methodology for building hybrid machine learning models, aiming to combine the power of different machine learning algorithms over different types of features and hypotheses. A generic cost-based outlier removal algorithm is introduced as a preprocessing step on the training data. We implement a hybrid machine learning model for a credit-rating problem and experiment with combinations of three types of machine learning algorithms: SVM, DT, and LR. The new hybrid models show improved performance compared to the traditional single SVM, DT, and LR models. The methodology can be further explored with other algorithms and applications.
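
A hedged scikit-learn sketch of both ideas: (1) score each training point by its log-loss under a preliminary model and drop the costliest few percent, then (2) combine SVM, DT, and LR by soft voting. The loss choice, the 5% cut, and the voting scheme are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Step 1: cost-based outlier removal -- score each training point by its
# log-loss under a preliminary fit and drop the costliest 5%.
prelim = LogisticRegression(max_iter=1000).fit(X, y)
p = prelim.predict_proba(X)[np.arange(len(y)), y]
cost = -np.log(np.clip(p, 1e-12, 1.0))
keep = cost <= np.quantile(cost, 0.95)
X_clean, y_clean = X[keep], y[keep]

# Step 2: hybrid SVM + DT + LR model on the cleaned training set.
hybrid = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("dt", DecisionTreeClassifier(max_depth=5)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
).fit(X_clean, y_clean)
print(f"train accuracy: {hybrid.score(X_clean, y_clean):.3f}")
```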

KEYWORDS

Machine Learning, Outlier Removal, Credit Score Modelling, Hybrid Learning Model


A Non-Convex Optimization Framework for Large-Scale Low-rank Matrix Factorization

Sajad Fathi Hafshejaniy1, Saeed Vahidian3, Zahra Moaberfard2 and Bill Lin3, 1Department of Computer Science, McGill University, Montreal, Canada, 2Department of Computer Science, Apadana University, Shiraz, Iran and 3Department of Electrical and Computer Engineering, University of California San Diego, CA, USA

ABSTRACT

Low-rank matrix factorization problems such as non-negative matrix factorization (NMF) can be categorized as clustering or dimension reduction techniques. The latter denotes techniques designed to find representations of a high-dimensional dataset in a lower-dimensional manifold without significant loss of information; if such a representation exists, it ought to retain the most relevant features of the dataset. Many linear dimensionality reduction techniques can be formulated as a matrix factorization. In this paper, we combine the conjugate gradient (CG) method with the Barzilai-Borwein (BB) gradient method and propose a BB-scaled CG method for NMF problems. The new method does not require computing and storing matrices associated with the Hessian of the objective function. Moreover, adopting a suitable BB step size along with a proper nonmonotone strategy, governed by the convex parameter ηk, results in a new algorithm that can significantly improve CPU time, efficiency, and the number of function evaluations. A convergence result is established, and numerical comparisons on both synthetic and real-world datasets show that the proposed method is efficient and superior in comparison with existing methods.
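
A numpy sketch of the central ingredient, alternating projected-gradient NMF updates with a Barzilai-Borwein (BB) step size; the CG combination and the nonmonotone strategy from the paper are omitted for brevity.

```python
import numpy as np

def nmf_bb(V, r, iters=200, seed=0):
    """Minimize 0.5*||W H - V||^2 with W, H >= 0, BB1 step size for W."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    gW_prev, W_prev = None, None
    step = 1e-3
    for _ in range(iters):
        gW = (W @ H - V) @ H.T                     # gradient of objective in W
        if gW_prev is not None:                    # BB1 step: s^T s / s^T y
            s, ygrad = (W - W_prev).ravel(), (gW - gW_prev).ravel()
            denom = s @ ygrad
            if denom > 1e-12:
                step = (s @ s) / denom
        W_prev, gW_prev = W.copy(), gW.copy()
        W = np.maximum(W - step * gW, 0.0)         # projection onto W >= 0
        # H subproblem: least-squares solve followed by projection.
        H = np.maximum(np.linalg.lstsq(W, V, rcond=None)[0], 0.0)
    return W, H

V = np.abs(np.random.default_rng(1).random((50, 40)))
W, H = nmf_bb(V, r=5)
print(f"relative error: {np.linalg.norm(V - W @ H) / np.linalg.norm(V):.3f}")
```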

KEYWORDS

Barzilai and Borwein Method, Matrix factorization, Non-Convex, Nonmonotone method


Oversampling Log Messages Using A Sequence Generative Adversarial Network for Anomaly Detection and Classification

Amir Farzad and T. Aaron Gulliver, Department of Electrical and Computer Engineering, University of Victoria, PO Box 1700, STN CSC, Victoria, BC Canada

ABSTRACT

Dealing with imbalanced data is one of the main challenges in machine/deep learning algorithms for classification. This issue is especially important with log message data, which are typically imbalanced because negative logs are rare. In this paper, a model is proposed that generates text log messages using a SeqGAN network; features are then extracted using an autoencoder, and anomaly detection and classification are performed using a GRU network. The proposed model is evaluated on two imbalanced log datasets, BGL and OpenStack. The results show that oversampling and balancing the data increases the accuracy of anomaly detection and classification.
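
A sketch of the final stage, assuming tensorflow.keras: a GRU network classifying encoded log-message sequences as normal or anomalous. The SeqGAN oversampling and autoencoder stages are elided, and the padded integer token sequences below are synthetic stand-ins for real BGL/OpenStack logs.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab, seq_len = 2000, 50
X = np.random.randint(1, vocab, size=(1024, seq_len))   # stand-in token ids
y = np.random.randint(0, 2, size=(1024,))               # 1 = anomalous

model = models.Sequential([
    layers.Embedding(vocab, 64),
    layers.GRU(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.1)
```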

KEYWORDS

Deep Learning, Oversampling, Log messages, Anomaly detection, Classification


Playing Virtual Musical Drums by MEMS 3D Accelerometer Sensor Data and Machine Learning

Shaikh Farhad Hossain, Kazuhisa Hirose, Shigehiko Kanaya and Md. Altaf-Ul-Amin, Computational Systems Biology Lab, Graduate School of Information Science, Nara Institute of Science and Technology (NAIST), 8916-5, Takayama, Ikoma, Nara 630-0192, Japan

ABSTRACT

Music is an entertaining part of our lives, and musical instruments are its important supporting elements. The acoustic drum plays a vital role when a song is sung. Over time, the style of musical instruments has changed while keeping identical tunes, as with the electronic drum. In this work, we have developed "Virtual Musical Drums" by combining MEMS 3D accelerometer sensor data with machine learning. Machine learning is spreading into every arena of problem solving, and MEMS sensors are shrinking large physical systems into small ones. We designed eight virtual drums for two sensors and found 91.42% detection accuracy in a simulation environment and 88.20% detection accuracy in a real-time environment with 20% window overlap. Although the detection accuracy was satisfactory, the virtual drum sound was unrealistic. We therefore implemented multiple-hit detection within a fixed interval, sound intensity calibration, and parallel processing of sound tunes, and selected the virtual drum sound files based on acoustic drum sound patterns and durations. Finally, we completed "Playing Virtual Musical Drums" and played the virtual drums successfully, like an acoustic drum. This work demonstrates a different application of MEMS sensors and machine learning, showing that data can serve not only information but also musical entertainment.
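
The recognition core can be sketched with scikit-learn: slide overlapping windows over 3-D accelerometer samples, extract simple per-axis statistics, and classify the drum with SVM and kNN. The window length, the feature set, and the synthetic signals are stand-ins for the SHIMMER sensor data; only the 20% overlap comes from the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def windows(signal, size=32, overlap=0.2):
    """Overlapping windows over a (T, 3) accelerometer stream."""
    step = int(size * (1 - overlap))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(win):
    """Mean, std, and peak magnitude per axis of one (size, 3) window."""
    return np.concatenate([win.mean(0), win.std(0), np.abs(win).max(0)])

rng = np.random.default_rng(0)
X, y = [], []
for drum in range(8):                      # eight virtual drums
    signal = rng.normal(loc=drum, scale=1.0, size=(640, 3))  # stand-in strokes
    for win in windows(signal):
        X.append(features(win))
        y.append(drum)
X, y = np.array(X), np.array(y)

for clf in (SVC(), KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, clf.fit(X, y).score(X, y))
```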

KEYWORDS

Virtual musical drum, MEMS, SHIMMER, support vector machines (SVM) and k-Nearest Neighbors (kNN)


Industrial Duct Fan Maintenance Predictive Approach Based on Random Forest

Mashael Maashi, Nujood Alwhibi, Fatima Alamr, Rehab Alzahrani, Alanoud Alhamid and Nourah Altawallah, Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia

ABSTRACT

When a manufacturer's equipment encounters an unexpected failure or undergoes unnecessary pre-scheduled maintenance, which happens for a total of millions of hours worldwide annually, the result is time-consuming and costly. Predictive maintenance can help, using modern sensing technology and sophisticated data analytics to predict the maintenance required for machinery and devices. The demands on modern maintenance solutions have never been greater: constant pressure to demonstrate cost-effectiveness and return on investment and to improve the competitiveness of the organization is always combined with pressure to improve equipment productivity and keep machines running at maximum output. In this paper, we propose a maintenance prediction approach based on a machine learning technique, namely the random forest algorithm. The main focus is on industrial duct fans, as they are among the most common equipment in manufacturing industries. The experimental results show the accuracy and reliability of the proposed predictive maintenance approach.
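
An illustrative scikit-learn sketch (not the paper's code): a random forest trained on duct-fan sensor readings to predict imminent failure. The feature names, the failure rule, and the synthetic data are assumptions standing in for real maintenance logs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(6.0, 1.5, n),    # vibration (mm/s)
    rng.normal(70.0, 10.0, n),  # bearing temperature (C)
    rng.normal(1450, 80, n),    # shaft speed (rpm)
    rng.normal(4.0, 0.8, n),    # motor current (A)
])
# Stand-in failure rule: high vibration plus high temperature, with noise.
y = ((X[:, 0] > 7.5) & (X[:, 1] > 75) | (rng.random(n) < 0.02)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {forest.score(X_te, y_te):.3f}")
print("feature importances:", np.round(forest.feature_importances_, 3))
```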

KEYWORDS

Predictive Maintenance, Maintenance, Random Forest, Machine Learning & Artificial Intelligence


Network Performance over the Virtual Forces Algorithm for Autonomous Aerial Drone Systems

MAHAMAT Moussa Dogoumi and Adam Skorek, Department of Electrical Engineering, University of Quebec in Trois-Rivieres, Quebec, Canada

ABSTRACT

FANET networks have seen progress recently, and solutions for positioning network nodes are numerous. One in particular, VBCA (Virtual forces Based Clustering Algorithm), offers a comprehensive and interesting approach: it has the particularity of positioning the nodes in 3-D. In general, 3-D positioning is an NP-hard problem, but with VBCA positioning is relatively simple. In this paper we present an optimization of communication in a topology based on VBCA. The topology is optimized by the VBCA algorithm for better area coverage; this method is 40% more efficient than existing approaches in terms of area coverage. In the first versions of VBCA, performance in terms of communication between nodes had not been tested. The resulting network performance is very encouraging in terms of throughput, delay, and packet loss. This work therefore provides a first answer on network performance.
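
A toy numpy sketch of the virtual-forces idea behind VBCA (an illustration, not the published algorithm): each drone is repelled by neighbours closer than a target spacing and attracted to farther ones, so the swarm spreads out to improve coverage.

```python
import numpy as np

def virtual_forces_step(pos, d_target=10.0, k=0.05):
    """One update of N drone positions (N, 3) under pairwise virtual forces."""
    force = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos - pos[i]                       # vectors to all other drones
        dist = np.linalg.norm(diff, axis=1)
        mask = dist > 1e-9                        # skip self
        # Positive coefficient -> attraction when too far; negative -> repulsion.
        coeff = k * (dist[mask] - d_target) / dist[mask]
        force[i] = (coeff[:, None] * diff[mask]).sum(axis=0)
    return pos + force

rng = np.random.default_rng(0)
pos = rng.random((12, 3)) * 5.0                   # start clustered in a 5 m cube
for _ in range(200):
    pos = virtual_forces_step(pos)
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
print(f"mean neighbour spacing: {d[d > 0].mean():.1f} m")  # roughly d_target
```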

KEYWORDS

VBCA, FANET, Communication, Topology, Positioning, Clustering


Reach Us

caiml@itcse2020.org


caimlconf@yahoo.com
