Print ISSN: 1812-125X

Online ISSN: 2664-2530

Main Subjects: Computer Science

Medical Images Classification Using Artificial Intelligence

Tasneem Mustafa; Jamal Salahaldeen Alneamy

Journal of Education and Science, In Press
DOI: 10.33899/edusj.2022.133358.1224

In recent years, the use of computing has increased alongside advances in medical practice, with impressive results in classification and treatment, in addition to easing the workload of medical personnel. This was evident during the Corona (COVID-19) pandemic, which infected millions around the world and created an urgent need for software tools that help classify the disease without requiring recourse to doctors. The matter is not limited to classifying COVID-19; it also extends to the detection of other diseases, such as malaria and skin cancer, that afflict large numbers of people. Malaria is an infectious disease caused by the Plasmodium parasite; according to some statistics, the total number of infections in 2019 reached about 228 million cases worldwide. As for skin cancer, it is considered one of the serious diseases that affect humans, because the skin plays a key role in protecting muscles and bones, and therefore cancer of the skin affects the body's functions.
CNNs have made great strides on many intractable problems in image processing and classification, but their performance depends on their hyperparameters, and tuning these by hand is a tedious task. Therefore, experts in the field of deep learning seek to improve CNN performance by integrating it with other algorithms, such as Particle Swarm Optimization, Grey Wolf Optimization, the Genetic Algorithm, or the Firefly Algorithm. Each of these algorithms yields a different level of performance.
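
As a rough illustration of how swarm-based hyperparameter tuning works, the sketch below runs a toy Particle Swarm Optimization over two hypothetical CNN hyperparameters (learning rate and filter count). The surrogate accuracy function, the parameter ranges, and the PSO constants are all illustrative assumptions, not values from the paper; in a real setup the surrogate would be replaced by actually training and validating the CNN.

```python
import random

def surrogate_accuracy(lr, filters):
    # Stand-in for CNN validation accuracy: peaks at lr=0.01, filters=32.
    return 1.0 - ((lr - 0.01) * 50) ** 2 - ((filters - 32) / 64) ** 2

def pso(n_particles=20, iters=60, seed=0):
    rng = random.Random(seed)
    # Each particle holds a (learning_rate, filter_count) position.
    pos = [[rng.uniform(0.001, 0.1), rng.uniform(8, 128)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [surrogate_accuracy(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.4, 1.4           # standard inertia/attraction constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = surrogate_accuracy(*pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

The same loop applies unchanged when the surrogate is replaced by a full train-and-validate run; the cost is then one CNN training per particle per iteration, which is why such searches are expensive.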

Proposing a Model for Detecting Intrusion Network Attacks Using Machine Learning Techniques

Teba Ali Jasem Ali; Muna Jawhar

Journal of Education and Science, In Press
DOI: 10.33899/edusj.2022.133867.1240

At the present time, reliance on computers is increasing in all aspects of life, so it is necessary to protect computer networks and computing resources from complex attacks. This is done by building tools, applications, and systems that detect attacks or anomalies while adapting to ever-changing architectures and dynamically changing threats. Providing network security is one of the most important aspects of network communications: the more networks grow and the more devices are added to them, the greater the need for security measures. A network security system protects the devices and data of network users, safeguards information shared on the network, protects people's personal information, and helps prevent users from falling victim to attackers.
The goal of this paper is to build a Network Intrusion Detection System (NIDS) based on deep learning techniques such as Convolutional Neural Network (CNN), which demonstrated its efficiency in predicting, classifying, and extracting high-level features in network traffic.
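
To make the feature-extraction idea concrete, here is a minimal sketch of the convolution-ReLU-pooling pipeline a CNN applies, written in plain Python over a hypothetical vector of per-packet sizes; the kernel and the toy flow are illustrative assumptions, not the paper's architecture or dataset.

```python
def conv1d(x, kernel, stride=1):
    # Slide the kernel over the sequence and take dot products.
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(x) - k + 1, stride)]

def relu(v):
    # Zero out negative responses.
    return [max(0.0, a) for a in v]

def max_pool(v, size=2):
    # Keep the strongest response in each non-overlapping window.
    return [max(v[i:i + size]) for i in range(0, len(v) - size + 1, size)]

# Toy flow feature vector (per-packet sizes) and an edge-detecting kernel
# that responds to jumps between small control packets and full-size data packets.
flow = [54, 54, 1500, 1500, 1500, 54, 60, 1500]
features = max_pool(relu(conv1d(flow, [-1.0, 0.0, 1.0])))
```

A real NIDS stacks many such learned kernels and feeds the pooled maps to fully connected layers for the attack/benign decision.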

Classification of Software Systems attributes based on quality factors using linguistic knowledge and machine learning: A review.

Abdulrhman Ali; Nada Nimat Saleem

Journal of Education and Science, In Press
DOI: 10.33899/edusj.2022.134024.1245

Both the functional requirements (what the system does) and the non-functional requirements (constraints on how it does it) of a software system are documented in a Software Requirements Specification (SRS).
Moreover, in requirements engineering, system requirements are classified into several categories, such as functional, quality, and constraint classes.
Therefore, we evaluate several machine learning approaches and methodologies mentioned in previous literature for automatic requirements extraction and classification, based on a methodical review of many previous works on software requirements classification, to assist software engineers in selecting the best requirement classification technique. The study aims to answer several questions: Which machine learning algorithms were used to classify the requirements? How do these algorithms work, and how were they evaluated? Which methods were used to extract features from text? Which evaluation criteria were used to compare results? And which machine learning techniques and methods provided the highest accuracy?
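
As one concrete instance of such a classifier, the sketch below trains a tiny multinomial Naive Bayes model on hypothetical requirement sentences and assigns a new sentence to the functional or quality class; the training sentences, tokenization, and smoothing constants are illustrative assumptions, not the corpus or methods of the reviewed works.

```python
import math
from collections import Counter, defaultdict

# Toy labeled requirements (hypothetical examples, not from any reviewed corpus).
TRAIN = [
    ("the system shall compute the monthly invoice", "functional"),
    ("the user shall be able to export reports", "functional"),
    ("the system shall respond within two seconds", "quality"),
    ("the interface shall be available 99.9 percent of the time", "quality"),
]

def train(samples):
    # Per-class word counts for a multinomial Naive Bayes classifier.
    counts = defaultdict(Counter)
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, alpha=1.0):
    # Laplace-smoothed log-likelihood per class; uniform priors assumed.
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        lp = sum(math.log((c[w] + alpha) / (total + alpha * len(vocab)))
                 for w in text.lower().split())
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The reviewed works replace the raw word counts with richer features such as TF-IDF vectors or word embeddings, but the decision rule follows the same shape.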

Implementation of OCR using Convolutional Neural Network (CNN): A Survey

Ahmed Abdulrahman Alkaddo; Dujan Albaqal

Journal of Education and Science, In Press
DOI: 10.33899/edusj.2022.133711.1236

Recently, character recognition and deep learning have caught the attention of many researchers. Optical Character Recognition (OCR) usually takes an image of a character as input and generates the identical character as output. The important role OCR plays is to transform printed materials into digital text files. The Convolutional Neural Network (CNN) is an influential model that has produced strong results in OCR. The state-of-the-art performance of deep neural networks makes them the usual choice for common recognition and classification problems, and many applications use them, for instance robotics, traffic monitoring, and the digitization of articles. A CNN is designed to adaptively and automatically learn features by using several kinds of layers (convolution layers, pooling layers, and fully connected layers). In this paper we go through the advantages and recent usage of CNNs in OCR, why they are important for handwritten and printed text recognition, and which applications this technique can serve. Researchers are increasingly using CNNs for both machine-printed and handwritten character recognition because CNN architectures are well suited to recognition tasks that take images as input.

A Review Of Clustering Methods Based on Artificial Intelligent Techniques

Baydaa ibraheem Khaleel

Journal of Education and Science, 2022, Volume 31, Issue 2, Pages 69-82
DOI: 10.33899/edusj.2022.133092.1218

Due to the development in various areas of life, the growth of the Internet, and the presence of many datasets, and in order to obtain useful information from the rapidly increasing volumes of digital data, there must be theories and computational tools to help humans extract the useful information they need from this data. Large amounts of data are collected from many different services and resources. Clustering is one of the most basic and well-known methods of data mining and of extracting useful information. The technique of recognizing natural groups or clusters within several datasets based on some measure of similarity is known as data clustering. Many researchers have introduced and developed clustering algorithms based on different artificial intelligence techniques. Finding the right algorithm greatly helps in organizing information and extracting the correct answer from different database queries. This paper provides an overview of the different clustering methods using artificial intelligence and of finding the appropriate clustering algorithm to process different datasets. We highlight the best-performing clustering algorithm that gives effective and correct clustering for each dataset.
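
As a baseline example of the clustering algorithms surveyed, here is a minimal k-means sketch in plain Python; the naive first-k initialization and the toy 2-D data are illustrative assumptions (real uses would prefer k-means++ initialization and proper feature scaling).

```python
def kmeans(points, k, iters=100):
    # Naive initialization with the first k points; k-means++ is the
    # usual improvement in practice.
    centers = [points[i] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        new = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:      # converged: assignments can no longer change
            break
        centers = new
    return centers, clusters

# Two obvious groups of 2-D points.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
```

The AI-based variants reviewed in the paper mostly change the two steps above: fuzzy methods soften the assignment step, while swarm and evolutionary methods replace the update step with a global search over center positions.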

Improving Security Using Cryptography Based on Smartphone User Locations

Anfal Mahmood; Ahmed S. Nori

Journal of Education and Science, 2022, Volume 31, Issue 2, Pages 94-104
DOI: 10.33899/edusj.2022.133190.1222

Smartphones have become widely employed in a range of fields as a result of substantial developments in communication technology and the proliferation of many types of smart mobile devices. The goal of this research is to secure information sent over mobile phone networks. In this paper, we propose using cryptography to create a more secure application for transmitting confidential information, using encryption to improve security and relying on the coordinates of the mobile phone user's location, obtained via GPS, to increase it further. The XOR operation was applied between the coordinates; this idea is new, the application was implemented, and good results were obtained. The process of converting text into unreadable text is known as ciphering; to achieve it in this paper, the Twofish algorithm was used to encrypt the confidential information, with the coordinates serving as its key. When sending the coordinates themselves, the RSA algorithm was used to encrypt them. We conclude that the proposed system achieved a high level of security.
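
A minimal sketch of the location-based key idea: the two GPS coordinates are combined with XOR and stretched into a 128-bit key of the size Twofish expects. The quantization precision and the SHA-256-based key derivation are our illustrative assumptions; the paper does not specify these details, and the Twofish encryption step itself is omitted here.

```python
import hashlib
import struct

def location_key(lat, lon, precision=4):
    # Quantize the GPS reading so that sender and receiver derive the same
    # key from slightly different fixes (the quantization step is our assumption).
    qlat = int(round(lat * 10 ** precision))
    qlon = int(round(lon * 10 ** precision))
    mixed = qlat ^ qlon            # XOR between the two coordinates
    # Stretch the XOR result into a 128-bit Twofish-sized key; the paper
    # does not specify a derivation function, so SHA-256 truncation is an
    # illustrative choice.
    return hashlib.sha256(struct.pack(">q", mixed)).digest()[:16]
```

Note that the coordinates contribute limited entropy on their own, which is why the paper additionally protects them in transit with RSA.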

Detecting A Medical Mask During The COVID-19 Pandemic Using Machine Learning: A Review Study

Mohammed Abdullsattar Abdullghani Mzeri; Laheeb M. Ibrahim

Journal of Education and Science, 2022, Volume 31, Issue 2, Pages 55-68
DOI: 10.33899/edusj.2022.133181.1221

Since the emergence of the COVID-19 pandemic, governments have instructed citizens to wear a medical mask in crowded places and institutions to prevent or reduce the spread of the pandemic. Since the most common method of transmission of COVID-19 is coughing or sneezing, the spread of infection can be reduced by wearing a medical mask, but ensuring that everyone wears a mask is not easy.
In this paper, we study research in the field of medical mask detection and the machine learning algorithms used to build systems capable of detecting medical masks on faces in images and video in real time. We also give an overview of the importance of machine learning and deep learning methods, especially the Convolutional Neural Network (CNN), and the basic steps for creating a medical mask detection system. We highlight the methods and stages of building each model along with its accuracy, survey the datasets used in building the models and the number of images used in the training and testing phases, and examine the mechanism by which each researcher built their own system.


Awos Khazal Ali

Journal of Education and Science, 2022, Volume 31, Issue 1, Pages 147-153
DOI: 10.33899/edusj.2022.132790.1211

Packet queuing and scheduling in network routers is a key factor in overall network performance. Many applications, especially those requiring Quality of Service (QoS), need techniques to pass their packets through routers and to control and/or avoid congestion on highly congested routes. Therefore, many Active Queue Management (AQM) algorithms have been developed to avoid or control congestion in routers and provide fairness among traffic flows. This paper provides an extensive performance evaluation of three well-known queue management algorithms, RED, REM, and traditional Drop-Tail, against QoS application requirements. The evaluation is conducted using network simulator version 2 (NS2). Network performance is measured with Voice over Internet Protocol (VoIP) traffic and three performance metrics: throughput, latency, and PSD (Probability of Sequential Drop). The analysis shows that no AQM algorithm achieves all the VoIP QoS requirements; a new AQM algorithm is needed to fulfil QoS requirements and manage the queue so as to handle unresponsive flows.
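
For reference, the core of the RED algorithm evaluated above can be sketched as follows: an exponentially weighted moving average of the queue length sets the early-drop probability between two thresholds. The threshold and weight values here are illustrative defaults, not the paper's simulation parameters.

```python
class RED:
    """Minimal Random Early Detection sketch (gentle mode omitted)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
        self.min_th, self.max_th, self.max_p, self.wq = min_th, max_th, max_p, wq
        self.avg = 0.0

    def drop_probability(self, queue_len):
        # EWMA of the instantaneous queue length smooths out bursts.
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return 0.0            # below min threshold: never drop early
        if self.avg >= self.max_th:
            return 1.0            # above max threshold: drop everything
        # Linear ramp between the thresholds.
        return self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
```

Drop-Tail, by contrast, only drops once the queue is physically full, and REM derives its drop probability from a price signal rather than the averaged queue length.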

Software Development Effort Estimation Techniques: A Survey

Farah Basil Alhamdany; Laheeb Mohammad Ibrahim

Journal of Education and Science, 2022, Volume 31, Issue 1, Pages 80-92
DOI: 10.33899/edusj.2022.132274.1201

Software Effort Estimation (SEE) aims to accurately predict development effort in terms of person-hours or person-months. Although many models exist, effort estimation remains one of the most difficult tasks in successful software development, and effort overestimation or underestimation can lead to the failure or cancellation of a project.
Hence, the main target of this research is to find a well-performing model for estimating software effort by conducting empirical comparisons using various Machine Learning (ML) algorithms. Various ML techniques were applied to seven effort estimation datasets (China, Albrecht, Maxwell, Desharnais, Kemerer, Cocomo81, and Kitchenham) to determine the best performer for software development effort estimation. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R-squared were the evaluation metrics considered. Experiments with the various ML algorithms showed that the LASSO algorithm on the China dataset produced the best performance compared to the other algorithms.

Analytical Study of Traditional and Intelligent Textual Plagiarism Detection Approaches

Ayob Ali; Alaa Yaseen Taqa

Journal of Education and Science, 2022, Volume 31, Issue 1, Pages 8-25
DOI: 10.33899/edusj.2021.131895.1192

The Web provides various kinds of data and applications that are readily available to explore and is considered a powerful tool for humans. Copyright violation in web documents occurs when there is an unauthorized copy of information or text from an original document on the web; this violation is known as plagiarism. Plagiarism Detection (PD) can be defined as the procedure that finds similarities between a document and other documents based on lexical, semantic, and syntactic textual features. Approaches for numeric representation (vectorization) of text, such as the Vector Space Model (VSM) and word embeddings, along with text similarity measures such as cosine and Jaccard, are essential for plagiarism detection. This paper deals with the concepts of plagiarism, kinds of plagiarism, textual features, text similarity measures, and plagiarism detection methods based on intelligent or traditional techniques. Furthermore, traditional algorithms and deep learning algorithms, for instance the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), are discussed as plagiarism detectors. Besides that, this work reviews many other papers that give attention to the topic of plagiarism and its detection.
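
The two similarity measures mentioned above can be sketched in a few lines over token lists; this is a generic illustration of cosine similarity on raw term-frequency vectors and Jaccard similarity on term sets, not the exact feature pipeline of any reviewed detector.

```python
import math
from collections import Counter

def cosine(a_tokens, b_tokens):
    # Cosine of the angle between raw term-frequency vectors.
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a_tokens, b_tokens):
    # Overlap of the two vocabularies: |A ∩ B| / |A ∪ B|.
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0
```

In practice the token counts would be replaced by TF-IDF weights or embedding vectors, but the scoring functions keep the same form.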

Intelligence System for Multi-Language Recognition

Fawziya Ramo; Mohammed Naif Kannah

Journal of Education and Science, 2022, Volume 31, Issue 1, Pages 93-110
DOI: 10.33899/edusj.2022.132223.1200

Language classification systems are used to classify spoken language from a particular phoneme sample and are usually the first step of many spoken language processing tasks, such as automatic speech recognition (ASR). Without automatic language detection, spoken speech cannot be properly analyzed and grammar rules cannot be applied, causing failures in subsequent speech recognition steps. We propose a language classification system that solves the problem in the image domain rather than the audio domain. This research identified and implemented several low-level features using Mel Frequency Cepstral Coefficients, which extract traits from speech files of four languages (Arabic, English, French, Kurdish) in the M2L_Dataset, the data source used in this research.
A Convolutional Neural Network operates on spectrogram images of the available audio snippets. In extensive experiments, we showed that our model is applicable to a range of noisy scenarios and can easily be extended to previously unknown languages while maintaining classification accuracy. We released our code and an extensive training package for language classification systems to the community.
The CNN was applied in this research for classification, and the results were excellent: between two languages, classification accuracy reached 97% when the sample length was only one second and 98% when it was two seconds; among three languages, accuracy reached 95% for one-second samples and 96% for two-second samples.

A Suggested System For Palmprint Recognition Using Curvelet Transform And Co-Occurrence Matrix.

Meaad Mohammed Alhadidi

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 65-76
DOI: 10.33899/edusj.2021.130870.1176

The main purpose of this paper is to create a palmprint recognition system (PPRS) that uses the curvelet transform and co-occurrence matrix to recognize a hand's palmprint.
The suggested system is composed of several stages: in the first stage, the region of interest (ROI) was extracted from the palmprint image; in the second stage, the curvelet transform was applied to the ROI to obtain a blurred version of the image, after which unsharp masking and Sobel filtering were applied for edge detection. The third stage involves feature extraction using a co-occurrence matrix to obtain 16 features, while the fourth stage comprises the training and testing of the suggested approach. The ant colony optimization (ACO) algorithm has been adopted to evaluate the shortest path to the goal.
The CASIA PalmprintV dataset of 100 people (60 male and 40 female) was used in the proposed work, and ARR and EER metrics were adopted to assess the performance of the proposed system.
The experimental results showed a very high average recognition rate (ARR), reaching 100% for the right hand of males and the left hand of females. The overall recognition rate (ARR) reaches 98.5%, and the EER equals 0.015.
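
For illustration, the co-occurrence step of the third stage can be sketched as follows on a tiny quantized image; the offset, symmetry choice, and the contrast feature shown are common GLCM conventions and are our assumptions, not necessarily the paper's exact settings.

```python
def glcm(image, levels=8, dx=1, dy=0, symmetric=True):
    """Gray-level co-occurrence matrix for one (dx, dy) pixel offset."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
                if symmetric:
                    # Count the pair in both directions.
                    m[image[r2][c2]][image[r][c]] += 1
    return m

def contrast(m):
    # One of the Haralick features computed from the normalized GLCM:
    # sum over cells of (i - j)^2 * p(i, j).
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))
```

The paper derives 16 such statistics from the co-occurrence matrix of the curvelet-filtered ROI; contrast is shown here as one representative.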

A New Method for Head Direction Estimation based on Dlib Face Detection Method and Implementation of Sine Inverse Function

Arqam Al-Nuaimi; Ghassan Mohmmed

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 114-124
DOI: 10.33899/edusj.2021.130962.1181

The detection and tracking of head movements has been an active area of research during the past years. This area contributes greatly to computer vision and has many applications. Several face detection methods and algorithms have been proposed because they are required in most modern applications, in which they act as the cornerstone of many interactive projects. Knowing the angles of the head, or head direction, is very useful in many fields, such as assistance for disabled people, criminal behavior tracking, and other medical applications. In this paper, a new method is proposed to estimate the angles of head direction based on the Dlib face detection algorithm, which predicts 68 landmarks on the human face. The calculations are mainly based on the predicted landmarks to estimate three types of angles: yaw, pitch, and roll. A Python program has been designed to perform face detection and determine head direction. To ensure accurate estimation, particular landmarks were selected such that they are not affected by the movement of the head, so the calculated angles are approximately accurate. The experimental results showed high accuracy for all three angles when comparing real and predicted measures. The sample standard deviations of the differences between the real and calculated angles were 0.0046 for yaw, 0.0077 for pitch, and 0.0021 for roll, which confirms the accuracy of the proposed method compared with other studies. Moreover, the method is fast, which supports accurate online tracking.
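
A simplified sketch of the inverse-sine idea: the yaw angle can be approximated from the horizontal offset of the nose-tip landmark relative to the midpoint of the eyes, normalized by the inter-eye distance. The specific landmark choice and normalization here are our assumptions, not necessarily the paper's exact geometry.

```python
import math

def yaw_from_landmarks(left_eye, right_eye, nose_tip):
    """Rough yaw estimate in degrees from three (x, y) landmark points:
    arcsin of the nose tip's horizontal offset from the eye midpoint,
    normalized by the inter-eye distance (our illustrative assumption)."""
    mid_x = (left_eye[0] + right_eye[0]) / 2
    eye_dist = math.dist(left_eye, right_eye)
    # Clamp to [-1, 1] so arcsin stays defined under landmark noise.
    ratio = max(-1.0, min(1.0, (nose_tip[0] - mid_x) / eye_dist))
    return math.degrees(math.asin(ratio))
```

Pitch and roll follow the same pattern with vertically displaced landmarks (for pitch) and the inter-eye slope (for roll).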

Ransomware Detection System Based on Machine Learning

Omar Shamil Ahmed; Omar Abdulmunem Ibrahim Al-Dabbagh

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 86-102
DOI: 10.33899/edusj.2021.130760.1173

Every day the Internet grows, along with the number of smart devices connected to the network. On the other hand, the number of malware programs that attack networks, devices, systems, and apps is also increasing. One of the biggest threats and newest attacks in cybersecurity is ransom software (ransomware). Although there is a lot of research on detecting malware using machine learning (ML), only a few works focus on ML-based ransomware detection, especially attacks targeting smartphone operating systems (e.g., Android) and their applications. In this research, a new system was proposed to protect smartphones from malicious apps by monitoring network traffic. Six ML methods (Random Forest (RF), k-Nearest Neighbors (k-NN), Multi-Layer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), and eXtreme Gradient Boosting (XGB)) were applied to the CICAndMal2017 dataset, which consists of benign samples and various kinds of Android malware samples. A total of 603,288 benign and ransomware samples were extracted from this collection, with the ransomware samples drawn from 10 different families. Several feature selection techniques were applied to the dataset. Finally, seven performance metrics were used to determine the best combination of feature selection technique and ML classifier for ransomware detection. The experimental results imply that DT and XGB outperform the other classifiers, with detection accuracies of more than 99.30% and 99.20%, respectively.
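
Several of the seven performance metrics used to rank the classifiers follow directly from the confusion-matrix counts, as in this generic sketch (the example counts in the test are hypothetical, not results from the paper):

```python
def metrics(tp, fp, tn, fn):
    """Classification metrics from confusion-matrix counts, where the
    positive class is 'ransomware'."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    # Precision: of the flows flagged as ransomware, how many really were.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: of the real ransomware flows, how many were caught.
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Accuracy alone can be misleading when benign traffic dominates, which is why precision, recall, and F1 are reported alongside it.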

Data Stream Mining Between Classical and Modern Applications: A Review

Ammar Thaher Yaseen Al Abd Alazeez

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 30-43
DOI: 10.33899/edusj.2021.130093.1158

Data mining (DM) is a powerful technology with great potential to help organizations focus on the most important information in the data they have gathered about the behavior of their customers and potential customers. It finds information within the data that queries and reports cannot effectively reveal. In general, DM is the process of analyzing data from different perspectives and summarizing it into useful information: information that can be used to increase revenue, reduce costs, or both. There are four main types of DM task: 1) classification and regression, 2) clustering, 3) association rule mining, and 4) outlier/anomaly detection. Addressing the velocity aspect of Big Data (BD) has recently attracted a great deal of interest in the research community because of its significant impact on data from almost every domain of life, such as healthcare, stock markets, and social networks. Many research works have addressed this velocity issue through mining data streams. Most existing data stream mining research focuses on adapting the main categories of approaches, techniques, and methods developed for static data to the dynamic, streaming situation. This paper explores the current literature in the field of data stream mining extensively and identifies the fundamental processing units underpinning various existing methods. The study not only helps researchers to formulate sound research topics and identify gaps in the field but also supports practitioners in developing DM and BD applications.

Human Activity Recognition: Literature Review

Mais Irreem Atheed; Dena Rafaa Ahmed; Rashad Adhed Kamal

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 12-29
DOI: 10.33899/edusj.2021.130293.1162

Human activity recognition plays an important role in human-to-human interaction and interpersonal relationships because it provides information about a person's identity, personality, activities, psychological state, and health. All of this information is difficult to extract because of the difficulty one person has in identifying the activities of another, and it is considered one of the basic research topics in the scientific fields of computer vision and machine learning. The purpose of human activity recognition (HAR) is to identify different human activities by monitoring and recording human actions and the various surrounding environments using computers. Vision-based human activity recognition research underlies many applications, including video surveillance, health care, security monitoring, and human-computer interaction.
In this research, a review of the newest developments in the human activity recognition field has been conducted, covering the different ways to recognize human actions. Important details are presented to survey HAR research and the methodologies used to represent human activities and their classification, to provide an overview of HAR methods and a comparison among them.

Detection of citrus diseases using a fuzzy neural network

Huda Saad Taher; Baydaa I. Khaleel

Journal of Education and Science, 2021, Volume 30, Issue 5, Pages 125-135
DOI: 10.33899/edusj.2021.130928.1179

The objective is to use AI techniques to build a citrus image recognition system and to produce an integrated program that will assist plant protection professionals in determining whether a plant is infected, enabling early detection for the purpose of taking the necessary preventive measures and reducing the spread of disease to other plants. In this research, RBF and FRBF networks were applied to 830 images to detect whether citrus fruits were healthy or diseased. First, the images were preprocessed and resized to 250 x 250 pixels, and features were extracted from them using the gray-level co-occurrence matrix (GLCM) method with the gray level set to 8 levels and a pixel distance of 1; 21 statistical features were derived. These features were then fed to the RBF network, with 21 input layer nodes, 20 hidden layer nodes, and 1 output node; the centers were randomly selected from the training data, and the weights were also randomly initialized and trained using the pseudo-inverse method. The RBF network was then hybridized with fuzzy logic using the FCM method with fuzziness parameter 2.3, yielding a new network called FRBF. These networks were trained and tested on training data (660 images) and testing data (170 images) of citrus fruits. The detection rate was then calculated, and the results showed that the FRBF achieved a higher accuracy of 98.24% compared to 94.71% for the RBF.
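
For context, the forward pass of an RBF network of the kind used here, Gaussian hidden units followed by a linear output node, can be sketched as follows; the centers, widths, and weights in the example are illustrative, whereas in the paper they were selected from the training data and fitted by the pseudo-inverse method.

```python
import math

def rbf_forward(x, centers, sigmas, weights, bias=0.0):
    """Forward pass of a trained RBF network: each hidden unit responds
    with a Gaussian of its distance to the input, and the output node
    linearly combines those responses."""
    hidden = []
    for c, s in zip(centers, sigmas):
        d2 = sum((a - b) ** 2 for a, b in zip(x, c))
        hidden.append(math.exp(-d2 / (2 * s * s)))
    return sum(w * h for w, h in zip(weights, hidden)) + bias
```

The FRBF variant in the paper replaces the random center selection with fuzzy c-means (FCM) cluster centers, which is what improves the detection accuracy.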

Text dependent speaker identification system based on deep learning

Qasim Sadiq Mahmood; Yusra Faisal Al-Irahyim

Journal of Education and Science, 2021, Volume 30, Issue 4, Pages 141-160
DOI: 10.33899/edusj.2021.130144.1161

Speaker identification techniques are among the most advanced modern technologies, and many different systems have been developed, covering the methods used for feature extraction and classification. Speaker identification applications are quite difficult and require modern technologies with a large number of audio samples and resources.
In this research, a speaker identification system was designed based on text (pre-defined words or sentences), which gives the system the ability to identify the speaker with the least time, number of training samples, and resources. The system consists of four main parts. The first is creating the audio databases: two were relied upon, the first being the QS-Dataset and the second the audioMNIST_meta database; the databases were processed and configured in a way explained in the body of the research. The second part is extracting features through the pitch coefficients algorithm, while the third is the use of a neural network as a classifier. The last part is verifying the operation and results of the system.
The test results showed the ability of the MNN network to deal with the smallest amount of data, as it achieved an accuracy of 100%; for large data, accuracy ranged from 80% to 81%. For the CNN network, by contrast, the results were not good for small amounts of data, ranging from 60% to 76%, while with large data the results were excellent, from 91% to 96%.
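
As an illustration of pitch-based features, a basic autocorrelation pitch estimator can be sketched as follows; this generic method and the synthetic 200 Hz test tone are our assumptions, not the paper's exact pitch-coefficient algorithm.

```python
import math

def pitch_autocorr(signal, sample_rate, fmin=50, fmax=500):
    """Estimate pitch (Hz) as the lag of the autocorrelation peak
    within the plausible voice range [fmin, fmax]."""
    lo = int(sample_rate / fmax)          # smallest lag to consider
    hi = int(sample_rate / fmin)          # largest lag to consider
    best_lag, best_val = lo, float("-inf")
    n = len(signal)
    for lag in range(lo, min(hi, n - 1) + 1):
        # Correlation of the signal with a lagged copy of itself;
        # it peaks when the lag matches the fundamental period.
        val = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return sample_rate / best_lag

# Synthetic 200 Hz tone, 0.1 s at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
```

A real front end computes such pitch-related coefficients per short frame and feeds the resulting sequence to the neural classifier.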

Design and Implementation of an Electronic System of Salaries: (Nineveh Investment Commission as a Model)

Mohamed Qusay Alchalabi; Mafaz Mohsin Alanezi

Journal of Education and Science, 2021, Volume 30, Issue 4, Pages 106-124
DOI: 10.33899/edusj.2021.129618.1146

Electronic systems are considered one of the most important pillars in the development of the work of any institution, especially the systems related to the administrative and financial aspects.
In this research, an electronic salary system for the Nineveh Investment Commission (NIC) was designed and implemented using C#. A central database was built using the SQL database management system, and the system runs over a local wireless network using a client/server model to connect the computers. The proposed system includes very important features, such as open system data that enables the user to add and amend the percentages of the basic and secondary salary components; automatic calculation of the salary from the employee's service record, obtained certificate, and fixed and variable allocations and deductions; calculation of all types of leave; and determination and organization of annual bonuses and promotions, making it easy for the user to know who is eligible and to update and calculate them. Several levels of system users were defined. A report was added for the employee's last salary certificate, along with detailed reports on salaries, and the system was strengthened with a backup feature to protect the database from damage and allow it to be restored at any time.
The system was tested on real data by issuing salary reports for three months. It was met with strong acceptance and found reliable in use, as measured by a questionnaire on the usability of the system given to specialists.

Speaker Recognition: Progression and challenges

Yusra Faisal Al-Irahyim; Qasim Sadiq Mahmood

Journal of Education and Science, 2021, Volume 30, Issue 4, Pages 59-68
DOI: 10.33899/edusj.2021.129802.1150

Speaker recognition is one of the most widely used topics in the field of speech technology. Much research has been conducted and considerable progress has been made in the past five to six years; owing to the advancement of deep learning techniques in most areas of machine learning, deep learning has replaced previous research methods in speaker recognition and verification. Deep learning is now the state-of-the-art solution for verifying and identifying a speaker's identity, with x-vectors and i-vectors considered the baseline in modern work. The aim of this study is to review deep learning methods applied to speaker identification and verification tasks, from older solutions (the Gaussian mixture model, the Gaussian mixture supervector model, and the i-vector model) to new solutions using deep neural networks (the deep belief network and the deep corrective learning network), as well as the types of metrics used to verify the speaker (cosine distance, probabilistic linear discriminant analysis) and the databases used for neural network training (TIMIT, VCTK, VoxCeleb2, LibriSpeech).

Real-Time Monitoring System Based on Li-Fi Network Technology in Healthcare

Yasser Nozad; Ayad Nozad Mohammedtawfiq

Journal of Education and Science, 2021, Volume 30, Issue 4, Pages 193-200
DOI: 10.33899/edusj.2021.130106.1159

Patients at healthcare facilities require a long-term continuous healthcare monitoring system to keep track of their vital signs. Because it deals with human life, this system must be safe and trustworthy and must not interfere with available radio frequencies or sensitive electronic devices such as MRI (magnetic resonance imaging) machines. This paper introduces a patient monitoring system for intensive care that uses Li-Fi technology, designed to help enhance patient care and improve doctors' clinical results. This robust approach collects patient data in a timely manner and integrates securely with the hospital IT framework, feeding information to physicians and allowing them to make informed clinical decisions. The system uses real-time software that displays the data from different locations for assessment. It was successfully tested in the laboratory. Some measurements are discussed, comparing the received pulses to the module's line-of-sight (LOS) output channel to correlate the transmitted channels. In this work, experimental analysis and measurements are performed to check the efficiency of the proposed concept.

Significance of Enhancement Technique In Segmentation of Image and Signal: A Review of the literature

Ghada Mohammad Tahir Kasim; Ashraf Al Thanoon; Haleema Solayman

Journal of Education and Science, 2021, Volume 30, Issue 4, Pages 15-27
DOI: 10.33899/edusj.2021.129161.1134

Over the last 70 years, there has been continuous development in the field of digital image processing, with applications in areas such as geology, biology, and medicine. Image processing plays an important role in solving problems across these numerous applications. Recently, wireless communication has become a dominant medium, but when a signal or image is transmitted through a wireless environment its quality degrades; this is a major issue, and it arises from acquisition and color space conversion. Hence, priority is given to enhancing the quality of the image or signal. Enhancement is the process responsible for improving the quality of a signal. In this paper, we focus on various enhancement techniques for images and signals, and we present the results of various enhancement techniques for image improvement. Signal enhancement is discussed briefly from a theoretical perspective.

Investigating indirect impacts of TCP connection on IMS network

Ali Abdulrazzaq K.

Journal of Education and Science, 2021, Volume 30, Issue 2, Pages 186-195
DOI: 10.33899/edusj.2021.130133.1160

The IP Multimedia Subsystem (IMS) is expected to be the major architectural framework of the Next Generation Network. IMS bridges multimedia communication among a variety of applications over the Internet, and it carries its multimedia signaling and streams over different transport protocols: TCP, UDP, and SCTP. TCP is a connection-oriented protocol that provides reliable data delivery and congestion control. To set up a TCP connection, IMS entities must complete extra operations; this operation (the worker process) costs the multimedia server extra load and delay. This paper investigates the indirect impacts of TCP connections resulting from the Call Session Control Function (CSCF) servers when they handle video communication. Two parameters, CPU usage and response time, are evaluated experimentally in two different scenarios. The experiments show that the outbound scenario performs better than the inbound scenario, due to the extra operations required to set up a new TCP connection for inbound calls.

Image Fusion by Shift Invariant Discrete Wavelet Transform for Remote Sensing Applications

Abdalrahman Ramzi Qubaa

Journal of Education and Science, 2021, Volume 30, Issue 2, Pages 53-66
DOI: 10.33899/edusj.2020.128261.1109

The fusion technique of the spectral bands captured by the sensors carried onboard satellites is one digital processing method for extracting information and detecting ground targets. Image fusion, also known as pan-sharpening, provides the means to combine many images into a single composite image suitable for visual or digital interpretation. The principal objective of this study is to find the most suitable algorithms for obtaining integrative information from several separate images in one combined image. To this end, a special software system was designed to implement and test the fusion methods used in remote sensing applications, by applying a Shift Invariant Wavelet Transform (SIWT) method to remote sensing images and then comparing it with four other image fusion algorithms. Two objective mathematical methods were used to measure the amount of shared information in the fused images, with visible and near-infrared images of the new European Sentinel-2 satellite covering part of Nineveh province serving as experimental data. The results showed that the wavelet transform method outperformed the other fusion methods for remote sensing images.
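As a rough illustration of wavelet-domain fusion in general, the sketch below decomposes two registered images, averages the approximation (LL) band, keeps the larger-magnitude coefficient in each detail band, and inverts the transform. It deliberately uses a plain single-level decimated Haar transform for brevity, not the shift-invariant (undecimated) transform of the paper, and the fusion rule is a common textbook choice, not necessarily the one the study evaluated:

```python
def haar2d(img):
    """Single-level 2D Haar transform of an even-sized grayscale image,
    returned as four subbands (LL, LH, HL, HH)."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = ([[0.0] * (w // 2) for _ in range(h // 2)] for _ in range(4))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2  # approximation
            LH[i // 2][j // 2] = (a - b + c - d) / 2  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 2  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2  # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(bands):
    """Inverse of haar2d: perfect reconstruction from the four subbands."""
    LL, LH, HL, HH = bands
    h, w = len(LL) * 2, len(LL[0]) * 2
    img = [[0.0] * w for _ in range(h)]
    for i in range(len(LL)):
        for j in range(len(LL[0])):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = (ll + lh + hl + hh) / 2
            img[2 * i][2 * j + 1] = (ll - lh + hl - hh) / 2
            img[2 * i + 1][2 * j] = (ll + lh - hl - hh) / 2
            img[2 * i + 1][2 * j + 1] = (ll - lh - hl + hh) / 2
    return img

def fuse(img1, img2):
    """Fuse two registered images of the same size: average the LL band,
    take the max-absolute coefficient in each detail band."""
    b1, b2 = haar2d(img1), haar2d(img2)
    fused = []
    for k, (s1, s2) in enumerate(zip(b1, b2)):
        if k == 0:  # LL band: average preserves overall radiometry
            fused.append([[(x + y) / 2 for x, y in zip(r1, r2)]
                          for r1, r2 in zip(s1, s2)])
        else:       # detail bands: keep the stronger edge response
            fused.append([[x if abs(x) >= abs(y) else y
                           for x, y in zip(r1, r2)]
                          for r1, r2 in zip(s1, s2)])
    return ihaar2d(tuple(fused))
```

The shift-invariant variant used in the paper differs in that it skips the downsampling step, which removes the blocking artifacts a decimated transform can introduce at the cost of redundant coefficients.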

Adopting Text Similarity Methods and Cloud Computing to Build a College Chatbot Model

Zaid Mundher; Wissam Khalf Khater; Laith Mohammed Ganeem

Journal of Education and Science, 2021, Volume 30, Issue 1, Pages 117-125
DOI: 10.33899/edusj.2020.127244.1079

A chatbot is a computer program that is designed to interact with users and answer questions. Nowadays, chatbots are among the most common systems used in many fields and by different companies to achieve different tasks. Cloud computing is also gaining increasing interest, and a myriad of fields and applications have been developed based on it.
In this paper, a college chatbot was developed and implemented to assist students in interacting with their college and asking questions related to faculty, activities, exams, admission, and other topics. Text similarity algorithms were adopted to achieve the proposed system. More specifically, the cosine similarity and Jaccard similarity algorithms were used to find the closest question in the dataset. The Firebase real-time database, one of the Google cloud services, was used as a connector channel between users and the chatbot server.
Experiments were conducted to evaluate the performance of the cosine similarity and Jaccard similarity methods and to compare their results. In addition, the real-time database was evaluated as a chatbot connector channel.
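The matching step described above can be sketched in plain Python: tokenize the user's query and each stored question, score the pairs with either metric, and return the best match. The `tokenize` and `closest_question` helper names and the toy dataset are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def tokenize(text):
    """Naive whitespace tokenizer; real systems would also strip punctuation."""
    return text.lower().split()

def jaccard(q1, q2):
    """Jaccard similarity: overlap of the two token sets."""
    s1, s2 = set(tokenize(q1)), set(tokenize(q2))
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

def cosine(q1, q2):
    """Cosine similarity over term-frequency vectors."""
    v1, v2 = Counter(tokenize(q1)), Counter(tokenize(q2))
    dot = sum(v1[t] * v2[t] for t in v1)
    n1 = math.sqrt(sum(c * c for c in v1.values()))
    n2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def closest_question(user_query, dataset, metric=cosine):
    """Return the stored question most similar to the user's query;
    the chatbot would then reply with that question's stored answer."""
    return max(dataset, key=lambda q: metric(user_query, q))
```

Note the difference between the two metrics: Jaccard ignores how often a word repeats, while cosine over term frequencies weights repeated words, which is one reason the paper compares their results.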

AEPRD: An Enhanced Algorithm for Predicting Results of Orthodontic Operations

Ammar Thaher Yaseen Al Abd Alazeez

Journal of Education and Science, 2021, Volume 30, Issue 1, Pages 173-190
DOI: 10.33899/edusj.2020.127785.1094

The face is the most prominent feature noticed at first sight of an individual. The soft tissue of the face, along with the underlying dentoskeletal tissues, defines a person's facial characteristics, and social acceptance, mental well-being, and self-esteem are all related to physical appearance. Strikingly, facial attributes are often most apparent in profile. Orthodontic diagnosis and treatment planning are increasingly based on profiles rather than simply on Angle's concept of molar relationship. It has been observed that particular skeletal angular measurements, the proportions of the soft tissue, and facial muscular posture can all affect the assessment of the profile.
One of the notable challenges in orthodontics is the treatment planning and management of orthognathic surgical cases. These cases require a combination of orthodontics and orthognathic surgery to achieve an even occlusion, appropriate function, and agreeable facial aesthetics. Early diagnosis of malocclusion is very helpful for properly straightening the teeth. Therefore, in this paper we developed a simple computer-aided program that can help predict dental occlusion. In other words, we take an image of an individual, classify it into one of the three primary types (Class I, Class II, or Class III), and predict the post-treatment outcome for Class II and Class III cases. This study provides information that can be used in treatment planning by specialists such as orthodontists, prosthodontists, plastic surgeons, and maxillofacial experts, who have the ability to modify the soft tissue facial features.

Refactoring for software maintenance: A Review of the literature

Rasha Alsarraj; Atica Altaie

Journal of Education and Science, 2021, Volume 30, Issue 1, Pages 89-102
DOI: 10.33899/edusj.2020.127426.1085

One technique for increasing the value of software quality is refactoring: the set of activities that enhance code by altering its inner structure without altering its outer behavior. It is a technique for cleaning up source code that decreases the likelihood of code faults, and it can be considered one of the most significant practices for maintaining advanced software systems. Empirical studies have indicated that refactoring has a positive effect on the maintainability and understandability of software systems. This study presents a literature review of 22 studies that examine the influence of refactoring on software quality attributes, especially maintainability. Through the review, the study draws the following conclusions: (1) applying refactoring activities increases the values of some quality attributes, such as understandability and maintainability; (2) several factors affect refactoring activities, including cohesion, coupling, information hiding, and encapsulation; (3) refactoring helps to improve source code without changing the behavior of the program; and (4) refactoring activities can be applied to the source code many times.

Employing Cloud Technologies in E-Learning Systems: University Students and Teachers’ Ability in Storing Information in “Cloud”: A “Google Classroom” Study

Luqman Abdulrahman Qader

Journal of Education and Science, 2020, Volume 29, Issue 4, Pages 245-258
DOI: 10.33899/edusj.2020.127247.1080

E-learning today has a significant impact on learning, due to its ease of accessibility and the fact that it is not constrained by geography, politics, or narrow economic interests. This significance gains special status given the doubts about the possibility of a final resolution of the Covid-19 pandemic in a short period. Education using digital technologies allows students to expand their access to knowledge resources and to special skills that support the curriculum. It also offers important features such as continuous assessment, which enables students to advance in research and develop their ideas, and it may provide more opportunities to extend their knowledge and to stimulate the critical thinking that forms when students are allowed to gain knowledge and reach conclusions by themselves. The spread of smartphones, together with the availability of Internet service at any time and place, allows digital services to cross many boundaries in sharing information. "Cloud" computing technology provides optimal solutions for building an effective infrastructure that lets researchers, teachers, and students access services from anywhere, using any kind of Internet-connected digital device, in order to obtain valuable resources and services and to take advantage of the capabilities and functions these modern environments provide. This contributes tools that support learning, teaching, and cooperative work, and it gives students and teachers a more convenient and effective learning experience.