Image Processing Projects - 2019


This blog post presents the best and latest image processing projects, drawn from recent scientific journals.

License Plate Localization in Unconstrained Scenes Using CNN-RNN

In this project, we propose a deep learning-based method to locate license plates in unconstrained scenes, especially degraded plates affected by fouling, occlusion, and similar conditions. A deep network consisting of a regional convolutional neural network (CNN) and a recurrent neural network (RNN) is designed. The experimental results show that the proposed method not only locates license plates of different countries accurately but is also robust to illumination variation, noise distortion, and blur.

Reference Paper
License Plate Localization in Unconstrained Scenes Using a Two-Stage CNN-RNN
Published in: IEEE Sensors Journal ( Volume: 19 , Issue: 13 , July 2019 )
https://ieeexplore.ieee.org/document/8643978

Real-Time Face Detection and Emotion Recognition Using the Viola-Jones Algorithm and Deep Learning

In this project, the problem of facial expression recognition is addressed in two stages: 1. face detection, 2. emotion recognition. In the first stage, we use the Viola-Jones algorithm to accurately detect the boundaries of the face with minimal residual margins. The second stage leverages a deep learning architecture. The experimental results clearly show that our proposed model outperforms the other methods.
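The Viola-Jones detector owes its speed to the integral image, which lets any rectangular (Haar-like) feature be evaluated in constant time. A minimal NumPy sketch of that core idea (the window coordinates below are illustrative, not taken from the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] from just four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
# A two-rectangle Haar-like feature: left half minus right half of the window
feature = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 2)
```

Because every rectangle sum costs four lookups regardless of size, thousands of Haar features can be evaluated per candidate window in real time.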

Reference Paper (IEEE 2019)
Realtime Face-Detection and Emotion Recognition Using MTCNN and miniShuffleNet V2
Published in: 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI)
https://ieeexplore.ieee.org/document/8734924

Driver Drowsiness Detection using EEG and Camera

Driver drowsiness detection is a key technology that can prevent fatal car accidents caused by drowsy driving. The present work proposes a driver drowsiness detection algorithm based on a camera and an EEG headset.

Reference Paper (IEEE 2019)
Heart Rate Variability-Based Driver Drowsiness Detection and Its Validation With EEG
Published in: IEEE Transactions on Biomedical Engineering ( Volume: 66 , Issue: 6 , June 2019 )
https://ieeexplore.ieee.org/document/8520803

Robust Human Activity Recognition Using Multimodal Feature-Level Fusion

Automated recognition of human activities or actions has great significance, as it supports wide-ranging applications, including surveillance, robotics, and personal health monitoring. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition, which utilizes data from multiple sensors, including an RGB camera.

Reference Paper IEEE 2019
Robust Human Activity Recognition Using Multimodal Feature-Level Fusion
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8701429

Real-Time Smart Attendance System using Face Recognition Techniques

Managing attendance can be a great burden on teachers if done by hand. To resolve this problem, smart automated attendance management systems are being used, but authentication remains an important issue in such systems. Smart attendance systems are generally implemented with the help of biometrics, and face recognition is one of the biometric methods used to improve them. As a prime feature of biometric verification, facial recognition is used extensively in applications such as video monitoring and CCTV footage systems, human-computer interaction, indoor access systems, and network security. By utilizing this framework, the problem of proxies and of students being marked present when they are not physically there can easily be solved. The main implementation steps in this type of system are face detection and recognition of the detected face. This paper proposes a model for an automated attendance management system for the students of a class using face recognition, based on eigenface values, Principal Component Analysis (PCA), and a Convolutional Neural Network (CNN).
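The eigenface step can be sketched in a few lines of NumPy: compute a PCA basis from the training faces, project every enrolled face into that low-dimensional space, and recognize a query by its nearest neighbour there. The data here is a random stand-in, not real face images:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))        # 20 flattened 8x8 stand-in "faces"

mean_face = faces.mean(axis=0)
# Eigenfaces = top principal components of the mean-centred data (via SVD)
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:5]                      # keep k = 5 components

def project(img):
    """PCA coefficients of an image in eigenface space."""
    return (img - mean_face) @ eigenfaces.T

gallery = np.array([project(f) for f in faces])   # enrolled students

def recognize(img):
    """Index of the nearest enrolled face in eigenface space."""
    return int(np.argmin(np.linalg.norm(gallery - project(img), axis=1)))
```

In an attendance system, `recognize` would map a detected face crop to a student ID; the CNN variant replaces the PCA projection with learned features.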

Reference Paper IEEE 2019
Real-Time Smart Attendance System using Face Recognition Techniques
Published in: 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
https://ieeexplore.ieee.org/document/8776934

Deep Unified Model For Face Recognition Based on Convolution Neural Network and Edge Computing

Deep learning and edge computing are emerging technologies used for the efficient processing of huge amounts of data with high accuracy. In the world of advanced information systems, one of the major issues is authentication; several techniques have been employed to solve this problem, and face recognition is considered one of the most reliable solutions. Usually, the research community has used the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF) for face recognition. This project proposes an algorithm for face detection and recognition based on convolutional neural networks (CNN), which outperforms the traditional techniques. To validate the efficiency of the proposed algorithm, a smart classroom for student attendance using face recognition has been proposed.

Reference Paper IEEE 2019
Deep Unified Model For Face Recognition Based on Convolution Neural Network and Edge Computing
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8721062

Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks

Breast lesion detection using ultrasound imaging is considered an important step of computer-aided diagnosis systems. Over the past decade, researchers have demonstrated the possibilities to automate the initial lesion detection. However, the lack of a common dataset impedes research when comparing the performance of such algorithms. This paper proposes the use of deep learning approaches for breast ultrasound lesion detection and investigates three different methods: a Patch-based LeNet, a U-Net, and a transfer learning approach with a pretrained FCN-AlexNet.

Reference Paper IEEE 2019
Automated Breast Ultrasound Lesions Detection Using Convolutional Neural Networks
Published in: IEEE Journal of Biomedical and Health Informatics ( Volume: 22 , Issue: 4 , July 2018 )
https://ieeexplore.ieee.org/document/8003418

Vehicle Theft Tracking, Detecting And Locking System Using OpenCV

In this project, face recognition and detection are performed in real time using the OpenCV Python module. Face recognition can solve many problems. A vehicle locking and detection device is installed in the vehicle. A mobile application recognizes the face and compares it against stored data to check whether the user is an authorized owner; if so, the vehicle is unlocked, otherwise it remains locked. If any person tries to break or damage the device, it automatically sends a message and places a call to the responsible person. This system secures the vehicle from theft and allows users to view the theft details, which are also saved to a USB drive.

Reference Paper IEEE 2019
Vehicle Theft Tracking, Detecting And Locking System Using OpenCV
Published in: 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS)
https://ieeexplore.ieee.org/document/8728460

A Static Hand Gesture and Face Recognition System for Blind People

This paper presents a recognition system that can be helpful for a blind person. A hand gesture recognition system and a face recognition system have been implemented, through which various tasks can be performed. Frames are taken from a live video and processed according to certain algorithms. In the hand gesture system, skin color detection is done in the YCbCr color space, and convexity defects of the hand contour are used to extract features such as fingertips and the angles between fingers. According to the gesture recognized, various tasks can be performed, like turning on a fan or lights. For faces, Haar cascade classifiers and the LBPH recognizer are used for detection and recognition, respectively. The system has been implemented with the help of OpenCV, and various hand gestures and human faces have been detected and identified using it.
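The skin-colour step can be illustrated with a plain NumPy conversion to YCbCr plus a Cb/Cr threshold box; the threshold values below are a commonly used heuristic, not necessarily the paper's exact ones:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely-skin pixels via a Cb/Cr box threshold."""
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

pixels = np.array([[[224, 172, 144],   # a light skin tone
                    [30, 80, 200]]])   # a blue background pixel
mask = skin_mask(pixels)
```

Working in Cb/Cr rather than RGB makes the rule largely independent of brightness, which is why YCbCr is the usual choice for skin segmentation.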

Reference Paper IEEE 2019
A Static Hand Gesture and Face Recognition System for Blind People
Published in: 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN)
https://ieeexplore.ieee.org/document/8711706

Retinal Vessels Segmentation Based on Dilated Multi-Scale Convolutional Neural Network

Accurate segmentation of retinal vessels is a basic step in diabetic retinopathy (DR) detection. Most methods based on deep convolutional neural networks (DCNN) have small receptive fields and hence cannot capture the global context of larger regions, making it difficult to identify pathological features. The final segmented retinal vessels then contain more noise and have low classification accuracy. Therefore, in this paper, we propose a DCNN structure named D-Net. In the encoding phase, we reduce the loss of feature information by reducing the downsampling factor, which eases the segmentation of tiny, thin vessels. We use combined dilated convolutions to effectively enlarge the receptive field of the network and alleviate the “grid problem” that exists in standard dilated convolution. In the proposed multi-scale information fusion module (MSIF), parallel convolution layers with different dilation rates are used, so that the model obtains denser feature information and better captures retinal vessels of different sizes. In the decoding module, skip connections propagate context information to the higher-resolution layers, preventing low-level information from having to pass through the entire network structure.
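The benefit of dilated convolution is easy to quantify: with stride 1, each layer adds (kernel - 1) x dilation to the receptive field, so a dilated stack grows it much faster at the same parameter cost. A small sketch (the layer configuration is illustrative, not D-Net's actual one):

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs; each layer adds
    (kernel_size - 1) * dilation pixels to the field of view.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three standard 3x3 convs vs. three dilated 3x3 convs (rates 1, 2, 4):
standard = receptive_field([(3, 1), (3, 1), (3, 1)])
dilated  = receptive_field([(3, 1), (3, 2), (3, 4)])
```

With identical parameter counts, the dilated stack sees a 15-pixel window where the standard stack sees only 7; mixing rates (rather than repeating one rate) is what avoids the “grid problem” of sampling only every n-th pixel.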

Reference Paper IEEE 2019
Retinal Vessels Segmentation Based on Dilated Multi-Scale Convolutional Neural Network
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8736207

Intelligent monitoring of indoor surveillance video based on deep learning

The smart monitoring system equipped with intelligent video analytics technology can monitor and pre-alarm abnormal events or behaviours, which is a hot research direction in the field of surveillance. This paper combines deep learning methods, using Mask R-CNN, the state-of-the-art framework for instance segmentation, to train a fine-tuned network on our datasets, which can efficiently detect objects in a video image while simultaneously generating a high-quality segmentation mask for each instance. The experiments show that our network is simple to train, generalizes easily to other datasets, and reaches a mask average precision of nearly 98.5% on our own datasets.

Reference Paper IEEE 2019
Intelligent monitoring of indoor surveillance video based on deep learning
Published in: 2019 21st International Conference on Advanced Communication Technology (ICACT)
https://ieeexplore.ieee.org/document/8701964

Review on Multi-Model Medical Image Fusion

Image fusion is one of the most promising areas in image processing. It plays a pivotal role in different applications, namely medical diagnosis, object detection and recognition, navigation, military and civilian surveillance, robotics, and satellite imaging for remote sensing. The process of image fusion aims to integrate two or more images into a single image that contains more useful information than any of the source images, without introducing artefacts. In this review paper, three aspects are considered: image fusion methods in the spatial and transform domains, fusion rules for transform-domain methods, and image fusion metrics.
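Two of the simplest fusion rules such reviews cover can be sketched directly: averaging in the spatial domain, and a choose-max rule applied per coefficient (as is typically done on transform-domain coefficients):

```python
import numpy as np

def average_fusion(a, b):
    """Spatial-domain rule: pixel-wise average of the two sources."""
    return (a + b) / 2.0

def max_abs_fusion(a, b):
    """Choose-max rule: at each position, keep whichever source
    has the larger magnitude (commonly applied to wavelet coefficients)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

a = np.array([[5.0, -1.0], [0.0, 2.0]])
b = np.array([[3.0, -4.0], [7.0, 2.0]])
fused = max_abs_fusion(a, b)
```

Averaging suppresses noise but blurs complementary detail; choose-max preserves the strongest detail from either source, which is why it is the usual rule on high-frequency transform coefficients.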

Reference Paper IEEE 2019
Review on Multi-Model Medical Image Fusion
Published in: 2019 International Conference on Communication and Signal Processing (ICCSP)
https://ieeexplore.ieee.org/document/8697906

A Robust Iris Segmentation Scheme Based on Hough Transform and U-Net

Iris segmentation plays an important role in an iris recognition system: accurate segmentation lays a good foundation for the follow-up stages of iris recognition and can greatly improve its efficiency. We propose four new feasible network schemes; the best model, a fully dilated convolution combined with U-Net (FD-UNet), is obtained by training and testing on the same datasets. FD-UNet uses dilated convolution instead of the original convolution to extract more global features, so that image details can be processed better.

Reference Paper IEEE 2019
A Robust Iris Segmentation Scheme Based on Improved U-Net
Publisher: IEEE
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8744291

Scene to Text Conversion and Pronunciation for Visually Impaired People

Machine learning algorithms and artificial intelligence are becoming elementary tools in the establishment of modern smart systems across the globe. In this context, an effective approach is suggested for automated text detection and recognition in natural scenes. The incoming image is first enhanced with Contrast Limited Adaptive Histogram Equalization (CLAHE). Afterward, text regions of the enhanced image are detected with the Maximally Stable Extremal Regions (MSER) feature detector; non-text MSERs are removed with appropriate filters, and the remaining MSERs are grouped into words. Text recognition is performed with an Optical Character Recognition (OCR) function, and the extracted text is pronounced by a suitable speech synthesizer. A prototype of the proposed system has been realized, and its functionality verified with an experimental setup; the results prove the concept and working principle of the devised system.
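The contrast-enhancement step can be illustrated with a simplified, global version of CLAHE: clip the histogram at a limit, redistribute the clipped excess uniformly, then equalize. (Real CLAHE does this per tile with bilinear interpolation between tiles; the clip limit below is arbitrary.)

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=40):
    """Global contrast-limited histogram equalization of a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    excess = np.maximum(hist - clip_limit, 0).sum()   # mass above the clip
    hist = np.minimum(hist, clip_limit) + excess / 256.0  # redistribute it
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * (cdf - cdf.min())
                   / (cdf.max() - cdf.min())).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(1)
# Low-contrast test image: values squeezed into [100, 140]
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = clipped_hist_equalize(img)
```

Clipping the histogram is what distinguishes CLAHE from plain equalization: it caps how much any single grey level can steepen the mapping, which limits noise amplification in flat regions.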

Reference Paper IEEE 2019
Scene to Text Conversion and Pronunciation for Visually Impaired People
Published in: 2019 Advances in Science and Engineering Technology International Conferences (ASET)
https://ieeexplore.ieee.org/document/8714269

Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks

This project is based on a disruptive hypothesis for periocular biometrics: in visible-light data, recognition performance is optimized when the components inside the ocular globe (the iris and the sclera) are simply discarded, and the recognizer’s response is based exclusively on information from the surroundings of the eye. As a major novelty, we describe a processing chain based on convolutional neural networks (CNNs) that defines the regions of interest in the input data that should be privileged in an implicit way, i.e., without masking out any areas in the learning/test samples. By using an ocular segmentation algorithm exclusively on the learning data, we separate the ocular from the periocular parts. Then, we produce a large set of “multi-class” artificial samples by interchanging the periocular and ocular parts from different subjects. These samples are used for data augmentation and feed the learning phase of the CNN, always using the ID of the periocular part as the label. This way, for every periocular region, the CNN receives multiple samples of different ocular classes, forcing it to conclude that such regions should not be considered in its response. During the test phase, samples are provided without any segmentation mask, and the network naturally disregards the ocular components, which contributes to improvements in performance.

Reference Paper IEEE 2019
Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks
Published in: IEEE Transactions on Information Forensics and Security ( Volume: 13 , Issue: 4 , April 2018 )
https://ieeexplore.ieee.org/document/8101565

A Method for Localizing the Eye Pupil for Point-of-Gaze Estimation

The estimation of the point of gaze in a scene presented on a digital screen has many applications, such as fatigue detection and attention tracking. Some popular applications of eye tracking through gaze estimation are depicted in Fig. 1. When estimating the point of gaze, identifying the visual focus of a person within a scene is required; this is known as the eye fix or point of fixation. Finding the point of gaze involves tracking different features of human eyes. Various methods are available for eye tracking, some of which use special contact lenses, whereas others rely on electrical potential measurements. Optical tracking is a nonintrusive technique that uses a sequence of image frames of the eyes recorded with video-capturing devices. This technique is popularly known as video oculography.
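A minimal baseline for pupil localization in video oculography is to threshold the eye crop and take the centroid of the dark pixels; real systems refine this with ellipse fitting or corneal-reflection compensation, and the threshold here is arbitrary:

```python
import numpy as np

def locate_pupil(eye_img, thresh=60):
    """Estimate the pupil centre as the centroid of the darkest pixels.
    The pupil is usually the darkest blob in a well-exposed eye crop."""
    ys, xs = np.nonzero(eye_img <= thresh)
    return xs.mean(), ys.mean()

# Synthetic eye crop: bright background with a dark pupil disc at (30, 20)
eye = np.full((40, 60), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:40, 0:60]
eye[(xx - 30) ** 2 + (yy - 20) ** 2 <= 25] = 10
cx, cy = locate_pupil(eye)
```

The pupil centre, tracked frame to frame relative to a fixed reference (such as the eye corners or a glint), is the raw signal from which the point of gaze is then calibrated.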

Reference Paper IEEE 2019
A Method for Localizing the Eye Pupil for Point-of-Gaze Estimation
Published in: IEEE Potentials ( Volume: 38 , Issue: 1 , Jan.-Feb. 2019 )
https://ieeexplore.ieee.org/document/8595416

A Study on Feature Extraction Methods Used to Estimate a Driver’s Level of Drowsiness

A driver’s condition can be estimated not only from basic characteristics such as gender, age, and driving experience, but also from the driver’s facial expressions, bio-signals, and driving behaviours. Recent developments in video processing using machine learning have enabled images obtained from cameras to be analysed with high accuracy. Therefore, based on the relationship between facial features and a driver’s drowsy state, variables that reflect facial features have been established. In this paper, we propose a method for extracting detailed features of the eyes, the mouth, and head positions using OpenCV and the Dlib library in order to estimate a driver’s level of drowsiness.

Reference Paper IEEE 2019
A Study on Feature Extraction Methods Used to Estimate a Driver’s Level of Drowsiness
Published in: 2019 21st International Conference on Advanced Communication Technology (ICACT)
https://ieeexplore.ieee.org/document/8701928

Computer Vision based drowsiness detection for motorized vehicles with Web Push Notifications

A real-time video system captures the face of the driver, and a pre-trained machine learning model detects the eye boundaries from that video stream. Each eye is then represented by six (x, y)-coordinates, starting from the left corner of the eye and working clockwise around it. The EAR (Eye Aspect Ratio) is calculated over 20 consecutive frames; if it stays below a threshold, an alarm sounds and an alert is sent to the driver’s mobile device through a Web Push Notification. When opened, the alert also shows coffee shops near the driver’s location to increase the driver’s alertness.
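The EAR computation and the 20-frame alarm rule described above can be sketched in plain Python (the landmark coordinates below are synthetic, not from a real detector):

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (left corner, then clockwise):
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|).
    Roughly constant for an open eye; drops toward 0 when the eye closes."""
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def drowsy(ear_history, threshold=0.25, frames=20):
    """Alarm condition: EAR below threshold for `frames` consecutive frames."""
    recent = ear_history[-frames:]
    return len(recent) == frames and all(e < threshold for e in recent)

open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

Requiring 20 consecutive low-EAR frames is what separates a drowsy eye closure from an ordinary blink, which only lasts a few frames.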

Reference Paper IEEE 2019
Computer Vision based drowsiness detection for motorized vehicles with Web Push Notifications
Published in: 2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU)
https://ieeexplore.ieee.org/document/8777652

A Video Processing Based Eye Gaze Recognition Algorithm for Wheelchair Control

In this project, a novel methodology for iris segmentation and gaze recognition is introduced and described. The method utilizes a segmentation algorithm that can successfully extract the iris under varying lighting conditions with the help of machine learning. All experiments were conducted in MATLAB R2013a, and a speed improvement of almost 3.433 times was achieved compared to other popular methods of iris extraction. In terms of accuracy, the algorithm proved to be 86% accurate, and it was also used to control an actual wheelchair.

Reference Paper IEEE 2019
A Video Processing Based Eye Gaze Recognition Algorithm for Wheelchair Control
Published in: 2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT)
https://ieeexplore.ieee.org/document/8770025

A Monocular Vision Sensor-Based Efficient SLAM Method for Indoor Service Robots

This project presents a new implementation method for efficient simultaneous localization and mapping (SLAM) using a forward-viewing monocular vision sensor. The method is developed to run in real time on a low-cost embedded system for indoor service robots. The orientation of the robot is estimated directly from the direction of the vanishing point, and the estimation models for the robot position and the line landmarks are derived as simple linear equations. Using these models, the camera poses and landmark positions are efficiently corrected by a local map correction method. The performance of the proposed method is demonstrated in various challenging environments, through dataset-based experiments on a desktop computer and real-time experiments on a low-cost embedded system. These environments include a real home-like setting with low-textured areas, moving people, and changing conditions. The proposed method is also tested on the RAWSEEDS (Robotics Advancement through Web-publishing of Sensorial and Elaborated Extensive Data Sets) benchmark dataset.
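For a pinhole camera, the orientation-from-vanishing-point idea reduces to one line of trigonometry: the robot's heading error is the angle subtended by the vanishing point's offset from the principal point. The camera intrinsics below are hypothetical:

```python
import math

def yaw_from_vanishing_point(u, cx, fx):
    """Heading of the corridor direction relative to the optical axis,
    in radians, from the vanishing point's image column u in a pinhole
    camera with principal point cx and focal length fx (both in pixels)."""
    return math.atan2(u - cx, fx)

fx, cx = 500.0, 320.0                                 # hypothetical intrinsics
straight = yaw_from_vanishing_point(320.0, cx, fx)    # aligned with corridor
turned = math.degrees(yaw_from_vanishing_point(820.0, cx, fx))
```

Because this estimate depends only on the current frame, it gives drift-free orientation in corridor-like indoor scenes, which is why it simplifies the remaining position and landmark models to linear equations.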

Reference Paper IEEE 2019
A Monocular Vision Sensor-Based Efficient SLAM Method for Indoor Service Robots
Published in: IEEE Transactions on Industrial Electronics ( Volume: 66 , Issue: 1 , Jan. 2019 )
https://ieeexplore.ieee.org/document/8338158

Helmet Detection Based On Improved YOLO Deep Model

Helmet wearing is very important to the safety of workers at construction sites and factories, and monitoring whether or not a helmet is worn is often a difficult point for enterprises. Based on the YOLO V3 full-regression deep neural network architecture, this paper exploits DenseNet’s advantages in parameter efficiency and computational cost, replacing the backbone of the YOLO V3 network for feature extraction and thus forming the so-called YOLO-Densebackbone convolutional neural network. The test results show that the improved model can effectively handle situations where the helmet is stained or partially occluded, or where there are many targets in a low-resolution image. On the test set, the improved algorithm’s detection accuracy increased by 2.44% over traditional YOLO V3 at the same detection rate.

Reference Paper IEEE 2019
Helmet Detection Based On Improved YOLO Deep Model
Published in: 2019 IEEE 16th International Conference on Networking, Sensing and Control (ICNSC)
https://ieeexplore.ieee.org/document/8743246

Neural Network-Based Vehicle and Pedestrian Detection for Video Analysis System

In this work, vehicles and pedestrians are considered objects of interest. Modern artificial neural networks can detect and localize objects of known classes, which allows them to be used in various machine vision and video analysis systems. In this paper, we compare three architectures (YOLO, Faster R-CNN, SSD) by the following criteria: processing speed, mAP, precision, and recall.
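The precision and recall criteria rest on matching predicted boxes to ground truth by intersection-over-union (IoU). A self-contained sketch at a single IoU threshold (mAP additionally averages precision over recall levels, classes, and confidence thresholds):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thr=0.5):
    """Greedily match predictions to unmatched ground truths at one threshold."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp       # detections with no matching ground truth
    fn = len(gts) - tp         # ground truths that were missed
    return tp / (tp + fp), tp / (tp + fn)

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (50, 50, 60, 60)]
p, r = precision_recall(preds, gts)
```

Here one prediction overlaps a ground truth well enough to count as a true positive, the second is a false positive, and one pedestrian/vehicle is missed, giving precision and recall of 0.5 each.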

Reference Paper IEEE 2019
Neural Network-Based Vehicle and Pedestrian Detection for Video Analysis System
Published in: 2019 8th Mediterranean Conference on Embedded Computing (MECO)
https://ieeexplore.ieee.org/document/8760125

Pedestrian Detection and Location Algorithm Based on Deep Learning

This project addresses the insufficient image-feature extraction of CNN backbone networks and the large parameter count of convolutional neural network-based target detection models. First, it analyzes the computation and parameter quantity of separable convolution versus standard convolution, and processes the original image by adding a sampling layer and a block-region extraction layer based on the Kronecker product. Then, the original image is sampled at two different ratios to form an image pyramid sequence, and the two pyramid levels are spliced so as to preserve the original image size. Furthermore, multi-scale training methods enhance the network’s learning ability without increasing the network scale.
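The separable-versus-standard comparison comes down to two small formulas; for a typical 3x3 layer, the depthwise-separable form needs close to k² times fewer weights (the channel counts below are illustrative):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 128, 256)
sep = separable_conv_params(3, 128, 256)
ratio = std / sep    # roughly 8.7x fewer parameters for this layer
```

This parameter saving is what makes separable convolutions attractive for shrinking detection models, at the cost of a somewhat weaker per-layer representation.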

Reference Paper IEEE 2019
Pedestrian Detection and Location Algorithm Based on Deep Learning
Published in: 2019 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS)
https://ieeexplore.ieee.org/document/8669579

Deep Learning based Automated Billing Cart

Shopping malls have become an integral part of life, and people in cities often visit them to purchase their daily requirements. In such a place, the environment must be made hassle-free. Our system is mainly designed for edible items like fruits and vegetables, for which barcodes and RFID tags cannot be used, since a tag would have to be stuck on each item and each item’s weight measured individually. The proposed system consists of a camera, which detects the commodity using deep learning techniques, and a load cell attached to the shopping cart, which measures the commodity’s weight. The system generates the bill when the customer scans the item in front of the camera fixed on the cart.

Reference Paper IEEE 2019
Deep Learning based Automated Billing Cart
Published in: 2019 International Conference on Communication and Signal Processing (ICCSP)
https://ieeexplore.ieee.org/document/8697995

A Novel Real-time Driver Monitoring System Based on Deep Convolutional Neural Network

In this project, we propose a novel real-time driver monitoring system based on a deep convolutional neural network. Our system can efficiently detect head and facial features, and it accurately estimates the distance between the driver’s head and the camera as well as the vertical and horizontal rotation angles of the head. Our work is inspired by the third version of YOLO (YOLOv3), a well-known object detection algorithm. We introduce improvements on YOLOv3 that further increase detection speed while maintaining excellent accuracy: we reduced the depth and width of the YOLO backbone network, calling the refined network HeadNet, and then applied the K-means algorithm to find appropriate anchors for head and facial features.
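The K-means anchor step can be sketched with the YOLO convention of clustering box (width, height) pairs using 1 - IoU as the distance; the toy boxes below (small "facial feature" boxes and large "head" boxes) are illustrative:

```python
import numpy as np

def wh_iou(boxes, centers):
    """IoU between (w, h) pairs, assuming boxes share a top-left corner."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """YOLO-style anchor clustering with d = 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(wh_iou(boxes, centers), axis=1)  # max IoU = min dist
        for j in range(k):
            if np.any(assign == j):
                centers[j] = np.median(boxes[assign == j], axis=0)
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]  # sort by area

boxes = np.array([[10, 12], [12, 10], [11, 11],
                  [60, 55], [58, 62], [62, 60]], dtype=float)
anchors = kmeans_anchors(boxes, k=2)
```

Using 1 - IoU instead of Euclidean distance keeps large boxes from dominating the clustering, so the anchors match the actual shape distribution of heads and facial features.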

Reference Paper IEEE 2019
A Novel Real-time Driver Monitoring System Based on Deep Convolutional Neural Network
Published in: 2019 IEEE International Symposium on Robotic and Sensors Environments (ROSE)
https://ieeexplore.ieee.org/document/8790428

Deep Learning for Logo Detection

We present a deep learning system for automatic logo detection in real-world images. We base our detector on the popular Faster R-CNN framework and compare its performance to other models such as Mask R-CNN and RetinaNet. We perform a detailed empirical analysis of various design and architecture choices and show how these can have much higher influence than algorithmic tweaks or popular techniques such as data augmentation. We also provide a systematic detection performance comparison of various models on multiple popular datasets, including FlickrLogos-32, TopLogo-10 and the recently introduced QMUL-OpenLogo benchmark, which allows for a direct comparison between recently proposed extensions.

Reference Paper IEEE 2019
Deep Learning for Logo Detection
Published in: 2019 42nd International Conference on Telecommunications and Signal Processing (TSP)
https://ieeexplore.ieee.org/document/8769038

Brain tumor Classification and Segmentation using Faster R-CNN

This project proposes a Convolutional Neural Network (CNN) for the classification problem and a Faster Region-based Convolutional Neural Network (Faster R-CNN) for the segmentation problem, with a reduced number of computations and a higher accuracy level. The research used 218 images as the training set; the system shows an accuracy of 100% for Meningioma and 87.5% for Glioma classification, and an average confidence level of 94.6% in segmenting Meningioma tumors. The segmented tumor regions are validated through ground-truth analysis and manual analysis by a neurologist.

Reference Paper IEEE 2019
Brain tumor Classification and Segmentation using Faster R-CNN
Published in: 2019 Advances in Science and Engineering Technology International Conferences (ASET)
https://ieeexplore.ieee.org/document/8714263

Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild

In this project, we have designed and implemented a detector by adopting the Faster R-CNN framework and the MobileNet structure. Color and shape information are used to refine the localization of small traffic signs, which are not easy to regress precisely. Finally, an efficient CNN with asymmetric kernels is used as the traffic sign classifier. Both the detector and the classifier have been trained on challenging public benchmarks. The results show that the proposed detector can detect all categories of traffic signs, and the detector and classifier proposed here prove superior to the state-of-the-art methods.

Reference Paper IEEE 2019
Real-Time Traffic Sign Recognition Based on Efficient CNNs in the Wild
Published in: IEEE Transactions on Intelligent Transportation Systems ( Volume: 20 , Issue: 3 , March 2019 )
https://ieeexplore.ieee.org/document/8392744

Lung Nodule Detection With Deep Learning in 3D Thoracic MR Images

In this project, a lung nodule detection method based on deep learning is proposed for thoracic MR images. With parameter optimization, spatial three-channel input construction, and transfer learning, a Faster R-CNN network is designed to locate the lung nodule region. Then, a false positive (FP) reduction scheme based on anatomical characteristics is designed to reduce FPs and preserve the true nodules. The proposed method is tested on 142 T2-weighted MR scans from the First Affiliated Hospital of Guangzhou Medical University. The sensitivity of the proposed method is 85.2% with 3.47 FPs per scan. The experimental results demonstrate that the designed Faster R-CNN network and the FP reduction scheme are effective for lung nodule detection and FP reduction in MR images.

Reference Paper IEEE 2019
Lung Nodule Detection With Deep Learning in 3D Thoracic MR Images
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8668396

Deep learning-based hand gesture recognition for collaborative robots

This project is a first step towards a smart hand gesture recognition setup for collaborative robots, using a Faster R-CNN object detector to find the accurate position of the hands in RGB images. In this work, a gesture is defined as a combination of two hands, where one is an anchor and the other codes the command for the robot. Additional spatial requirements are used to improve the performance of the model and to filter out incorrect predictions made by the detector. As a first step, we used only four gestures.

Reference Paper IEEE 2019
Deep learning-based hand gesture recognition for collaborative robots
Published in: IEEE Instrumentation & Measurement Magazine ( Volume: 22 , Issue: 2 , April 2019 )
https://ieeexplore.ieee.org/document/8674634

Breast Cancer Detection Using Extreme Learning Machine Based on Feature Fusion With CNN Deep Features

A computer-aided diagnosis (CAD) system based on mammograms enables early breast cancer detection, diagnosis, and treatment. However, the accuracy of the existing CAD systems remains unsatisfactory. This paper explores a breast CAD method based on feature fusion with convolutional neural network (CNN) deep features. First, we propose a mass detection method based on CNN deep features and unsupervised extreme learning machine (ELM) clustering. Second, we build a feature set fusing deep features, morphological features, texture features, and density features. Third, an ELM classifier is developed using the fused feature set to classify benign and malignant breast masses. Extensive experiments demonstrate the accuracy and efficiency of our proposed mass detection and breast cancer classification method.

Reference Paper IEEE 2019
Breast Cancer Detection Using Extreme Learning Machine Based on Feature Fusion With CNN Deep Features
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8613773
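The ELM classifier at the core of the pipeline can be sketched in a few lines: hidden-layer weights are drawn randomly and left untrained, and only the output weights are solved for in closed form via the pseudoinverse. The toy features and network size below are illustrative, not the paper's configuration.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    # Random input weights stay fixed; only output weights are learned.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y  # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: benign (0) vs malignant (1) feature vectors.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])
model = elm_train(X, y)
pred = (elm_predict(X, model) > 0.5).astype(int)
```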

Glaucoma Detection Using Fundus Images of The Eye

Glaucoma is one of the leading causes of irreversible blindness in people over 40 years old. In Colombia there is a high prevalence of the disease, made worse by the fact that there are not enough ophthalmologists for the country’s population. Fundus imaging is the most widely used screening technique for glaucoma detection because of its trade-off between portability, size, and cost. In this paper we present a computational tool for automatic glaucoma detection. We report improvements for disc segmentation in comparison with other works in the literature, a novel method to segment the cup by thresholding, and a new measure relating the size of the cup to the size of the disc.

Reference Paper IEEE 2019
Glaucoma Detection Using Fundus Images of The Eye
Published in: 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA)
https://ieeexplore.ieee.org/document/8730250
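The cup-to-disc measure can be illustrated with a simple thresholding sketch, under the assumption that the cup is the brightest region inside the bright disc. The two thresholds below are made-up example values, not the paper's.

```python
import numpy as np

def cup_to_disc_ratio(gray, disc_thr=100, cup_thr=200):
    """Estimate a cup-to-disc area ratio by intensity thresholding.

    In fundus images the optic disc is bright and the cup is the
    brightest central region, so two thresholds give rough masks.
    Thresholds here are illustrative only.
    """
    disc = gray >= disc_thr
    cup = gray >= cup_thr
    if disc.sum() == 0:
        return 0.0
    return cup.sum() / disc.sum()

# Synthetic image: dark background, bright disc, brighter cup inside it.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 150   # disc region: 100 pixels
img[8:12, 8:12] = 220   # cup region: 16 pixels
ratio = cup_to_disc_ratio(img)  # 16 / 100 = 0.16
```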

Recognition of Diabetic Retinopathy Based on Transfer Learning

This project proposes a method for diabetic retinopathy (DR) recognition based on transfer learning. First, data are downloaded from Kaggle’s official website, and then data enhancement is performed, including data amplification, flipping, folding, and contrast adjustment. Next, pretrained models such as VGG19, InceptionV3, and ResNet50 are used; each of these networks has already been trained on the ImageNet dataset, so what remains is to migrate the DR images to these models. Finally, the images are divided into 5 classes according to the severity of diabetic retinopathy. The experimental results show that the classification accuracy of this method can reach 0.60, which is better than the traditional direct-training method and offers better robustness and generalization.

Reference Paper IEEE 2019
Recognition of Diabetic Retinopathy Based on Transfer Learning
Published in: 2019 IEEE 4th International Conference on Cloud Computing and Big Data Analysis (ICCCBDA)
https://ieeexplore.ieee.org/document/8725801
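Two of the enhancement steps described above, flipping and contrast adjustment, can be sketched as follows; the contrast factor is an example value, not the paper's.

```python
import numpy as np

def augment(img, flip=True, contrast=1.2):
    """Return the original image plus simple augmented variants:
    a horizontal flip and a contrast-adjusted copy (illustrative factor)."""
    out = [img]
    if flip:
        out.append(img[:, ::-1])  # horizontal flip
    # Contrast stretch around the mean, clipped to the valid 8-bit range.
    mean = img.mean()
    stretched = np.clip((img - mean) * contrast + mean, 0, 255)
    out.append(stretched.astype(np.uint8))
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
augmented = augment(img)  # [original, flipped, contrast-adjusted]
```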

An Iterative Image Inpainting Method Based on Similarity of Pixels Values

Image inpainting is the process of completing missing regions of an image using its undamaged sections, or of removing unwanted objects from the image, and it holds an essential place in image processing. In this study, we propose a novel image inpainting method that fills a corrupted area by using the similarity of the boundary pixel values around the corrupted region at every iteration step. Afterwards, to evaluate the inpainting quality of the proposed method, we use the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) metrics and present some visual results. The results show that the proposed inpainting method gives outstanding performance in filling corrupted areas and removing objects.

Reference Paper IEEE 2019
An Iterative Image Inpainting Method Based on Similarity of Pixels Values
Published in: 2019 6th International Conference on Electrical and Electronics Engineering (ICEEE)
https://ieeexplore.ieee.org/document/8792492

Improved Background Subtraction-based Moving Vehicle Detection by Optimizing Morphological Operations using Machine Learning

Object detection represents the most important component of Automated Vehicular Surveillance (AVS) systems. Moving vehicle detection based on background subtraction, with fixed morphological parameters, is a popular approach in AVS systems. However, the performance of such an approach deteriorates in the presence of sudden illumination changes in the scene. To address this issue, this paper proposes a machine-learning-based method to adjust the morphological parameters to the illumination changes in the scene in real time. The features used in the machine learning models are the first-, second-, third- and fourth-order statistics of the grayscale images, and the outputs are the appropriate morphological parameters. The resulting background-subtraction-based object detection is shown to be robust to illumination changes and to significantly outperform the conventional approach. Further, an artificial neural network (ANN) is shown to provide better performance than Naive Bayes and K-Nearest Neighbours models.

Reference Paper IEEE 2019
Improved Background Subtraction-based Moving Vehicle Detection by Optimizing Morphological Operations using Machine Learning
Published in: 2019 IEEE International Symposium on INnovations in Intelligent SysTems and Applications (INISTA)
https://ieeexplore.ieee.org/document/8778263
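The feature set described above, the first- to fourth-order statistics (mean, variance, skewness, kurtosis) of a grayscale frame, can be computed directly:

```python
import numpy as np

def illumination_features(gray):
    """Mean, variance, skewness and kurtosis of a grayscale frame,
    matching the statistical feature set described above."""
    x = gray.astype(np.float64).ravel()
    mu = x.mean()
    var = x.var()
    std = np.sqrt(var) if var > 0 else 1.0
    skew = np.mean(((x - mu) / std) ** 3)
    kurt = np.mean(((x - mu) / std) ** 4)
    return mu, var, skew, kurt

# Toy frame with two intensity levels: mean 127.5, variance 16256.25.
frame = np.array([[0, 255], [0, 255]], dtype=np.uint8)
feats = illumination_features(frame)
```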

Background Subtraction with Real-time Semantic Segmentation

In this project, we explore this problem from a new perspective and propose a novel background subtraction framework with real-time semantic segmentation (RTSS). Our framework consists of two components: a traditional BGS segmenter B and a real-time semantic segmenter S. The BGS segmenter B constructs background models and segments foreground objects. The real-time semantic segmenter S refines the foreground segmentation outputs as feedback for improving model-updating accuracy. B and S work in parallel on two threads. For each input frame It, the BGS segmenter B computes a preliminary foreground/background (FG/BG) mask Bt. At the same time, the real-time semantic segmenter S extracts the object-level semantics St. Then, specific rules are applied to Bt and St to generate the final detection Dt. Finally, the refined FG/BG mask Dt is fed back to update the background model.

Reference Paper IEEE 2019
Background Subtraction with Real-time Semantic Segmentation
Published in: IEEE Access ( Early Access )
https://ieeexplore.ieee.org/document/8645635
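One plausible reading of the fusion step is sketched below: where the semantic segmenter is confident, its decision overrides the BGS mask; elsewhere the BGS mask is kept. The confidence thresholds are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def fuse_masks(bgs_mask, sem_prob, lo=0.2, hi=0.8):
    """Fuse a binary BGS mask Bt with per-pixel semantic foreground
    probabilities St: trust the semantic segmenter where it is confident
    (>= hi -> foreground, <= lo -> background), keep Bt in between.
    Thresholds are example values."""
    out = bgs_mask.copy()
    out[sem_prob >= hi] = True
    out[sem_prob <= lo] = False
    return out

# Four pixels: semantics overrides the first and third, BGS decides the rest.
bgs = np.array([True, True, False, False])
sem = np.array([0.1, 0.5, 0.9, 0.5])
fused = fuse_masks(bgs, sem)  # [False, True, True, False]
```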

A Deep Learning RCNN Approach for Vehicle Recognition in Traffic Surveillance System

Automatic moving vehicle detection and recognition are crucial steps in traffic surveillance applications. Frame extraction is the first step, followed by box-filter-based background estimation and removal; the box filter smooths the rapid variations caused by the movement of vehicles. Moving vehicles are then detected by analyzing the pixel-wise differences between the estimated background and the input frames. The detection phase is followed by a recognition phase that classifies the different vehicle classes. The deep learning framework Region-based Convolutional Neural Network (RCNN) is implemented for the recognition of vehicles using region proposals; thanks to the region proposals, computational complexity is reduced.

Reference Paper IEEE 2019
A Deep Learning RCNN Approach for Vehicle Recognition in Traffic Surveillance System
Published in: 2019 International Conference on Communication and Signal Processing (ICCSP)
https://ieeexplore.ieee.org/document/8698018
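The background estimation and detection steps described above can be sketched with a box (mean) filter and pixel-wise differencing; the kernel size and threshold are example values.

```python
import numpy as np

def box_filter(img, k=3):
    """k x k mean (box) filter with edge padding, used here to smooth
    rapid variations when estimating the background."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detect_moving(frame, background, thr=30):
    """Pixel-wise difference between estimated background and frame."""
    return np.abs(frame.astype(np.int16) - background) > thr

# Static background plus a small bright "vehicle" blob in the new frame.
background = box_filter(np.zeros((8, 8)))
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 100
mask = detect_moving(frame, background)  # True only at the blob
```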

Deep Foreground Segmentation using Convolutional Neural Network

This paper proposes a foreground segmentation algorithm powered by a convolutional neural network. The task requires the CNN to extract features from a given image and upsample them to separate background and foreground. The proposed algorithm consists of two networks: a VGG-16-based CNN extracts features from the input image, and a deconvolution network upsamples the feature maps. The upsampled image is passed through a sigmoid and thresholded to obtain background/foreground information. The proposed method is tested on all categories of the change detection dataset, which consists of 11 challenging categories such as dynamic background, bad weather, camera jitter, and low frame rate. The proposed method is compared with state-of-the-art foreground detection algorithms to prove its effectiveness.

Reference Paper IEEE 2019
Deep Foreground Segmentation using Convolutional Neural Network
Published in: 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE)
https://ieeexplore.ieee.org/document/8781278

Adaptive Multiple-pixel Wide Seam Carving

The seam carving method is an effective image retargeting method that suffers from high computational complexity. It requires finding a one-pixel-wide minimum-energy path, called a seam, in either the vertical or horizontal direction to reduce the image size by one pixel. In this paper, we propose an acceleration of the seam carving method by expanding the width of the seam, making it multiple-pixel wide. Two types of energy, one corresponding to the pixels to be removed and another corresponding to the pixels across the multiple-pixel-wide seam, increase as the width of the seam increases. To prevent this increase, we make the width of the seam adaptive as a function of the number of iterations. We find the width of a seam for each iteration as a prior for the seam carving process, using a set of maximum-energy seams in the direction orthogonal to the seam carving process. Qualitative and quantitative results show that the proposed method performs faster and better than other state-of-the-art image retargeting operators.

Reference Paper IEEE 2019
Adaptive Multiple-pixel Wide Seam Carving
Published in: 2019 National Conference on Communications (NCC)
https://ieeexplore.ieee.org/document/8732245
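The one-pixel-wide seam that the adaptive method generalizes is found by dynamic programming over an energy map, as in this sketch:

```python
import numpy as np

def vertical_seam(energy):
    """Find the minimum-energy one-pixel-wide vertical seam by dynamic
    programming: each pixel's cost is its energy plus the cheapest of
    its three upper neighbours; the seam is read off by backtracking."""
    h, w = energy.shape
    cost = energy.astype(np.float64)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = [int(cost[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(cost[y, lo:hi].argmin()))
    return seam[::-1]

# A zero-energy column at x = 1 should attract the whole seam.
energy = np.ones((4, 4))
energy[:, 1] = 0
seam = vertical_seam(energy)  # [1, 1, 1, 1]
```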

Real-Time Detection of Apple Leaf Diseases Using Deep Learning Approach Based on Improved Convolutional Neural Networks

This project proposes a deep learning approach based on improved convolutional neural networks (CNNs) for the real-time detection of apple leaf diseases. First, the apple leaf disease dataset (ALDD), composed of laboratory images and complex images taken under real field conditions, is constructed via data augmentation and image annotation technologies. Based on this, a new apple leaf disease detection model using deep CNNs is proposed by introducing the GoogLeNet Inception structure and Rainbow concatenation. Finally, the proposed INAR-SSD (SSD with Inception module and Rainbow concatenation) model is trained on a dataset of 26,377 images of diseased apple leaves to detect five common apple leaf diseases and is evaluated on a hold-out testing dataset. The experimental results show that the INAR-SSD model achieves a detection performance of 78.80% mAP on ALDD with a high detection speed of 23.13 FPS. The results demonstrate that the novel INAR-SSD model provides a high-performance solution for the early diagnosis of apple leaf diseases, performing real-time detection with higher accuracy and faster detection speed than previous methods.

Reference Paper IEEE 2019
Real-Time Detection of Apple Leaf Diseases Using Deep Learning Approach Based on Improved Convolutional Neural Networks
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8706936

An image preprocessing method for kidney stone segmentation in Ultrasound scan images

Reference Paper IEEE 2018
An image preprocessing method for kidney stone segmentation in CT scan images
Published in: 2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM)

Automated Tuberculosis detection using Deep Learning

Tuberculosis (TB) in India is the world’s largest TB epidemic [1], leading to 480,000 deaths every year [2]. Between the years 2006 and 2014, the Indian economy lost $340 billion (USD) due to TB. This, combined with the emergence of drug-resistant bacteria in India, makes the problem worse [3]. The government of India has hence come up with a new strategy that requires a high-sensitivity microscopy-based TB diagnosis mechanism [2]. We propose a new deep-neural-network-based TB diagnosis methodology with recall and precision of 83.78% and 67.55% respectively for bacillus detection from microscopy images of sputum. The proposed method takes a microscopy image of sputum with a proper zoom level as input and returns the locations of suspected Mycobacterium tuberculosis bacilli as output. The high sensitivity of our method gives it the potential to evolve into an effective and accessible screening tool for TB detection, when trained at scale.

Reference Paper IEEE 2018
Automated Tuberculosis detection using Deep Learning
Published in: 2018 IEEE Symposium Series on Computational Intelligence (SSCI)
https://ieeexplore.ieee.org/document/8628800

Shadow detection and removal from images using machine learning and morphological operations

A machine learning model, the enhanced streaming random tree (ESRT), is proposed. The image is converted to HSV and 26 parameters are taken as image measurements. A dataset in Attribute-Relation File Format is created for shadow and non-shadow images. The algorithm is trained on the training dataset and tested on the test dataset. Segmentation is then performed, grouping pixels with similar threshold homogeneity. Colour chromaticity is used to remove cast shadows, and morphological processing removes the remaining shadow from the image. The algorithm shows a better detection rate and accuracy compared with the Bayesian classifiers available in WEKA.

Reference Paper IEEE 2019
Shadow detection and removal from images using machine learning and morphological operations
Published in: The Journal of Engineering ( Volume: 2019 , Issue: 1 , 1 2019 )
https://ieeexplore.ieee.org/document/8627060
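The HSV conversion that the pipeline starts from can be done with the standard library. The toy pixels illustrate why HSV helps here: a shadowed pixel keeps roughly the hue of its lit neighbour but has a much lower value (V).

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert a list of (R, G, B) 0-255 pixels to HSV tuples in [0, 1],
    the colour space the shadow-detection pipeline above operates in."""
    return [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            for (r, g, b) in pixels]

# A lit pixel and a darker pixel with the same colour proportions:
# same hue, much lower V, which is the cue for cast-shadow detection.
lit, shadow = (200, 100, 50), (100, 50, 25)
(h1, s1, v1), (h2, s2, v2) = rgb_image_to_hsv([lit, shadow])
```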

Semantic Food Segmentation for Automatic Dietary Monitoring

Automatic food analysis is an important task, not only for personal dietary monitoring to treat and control health-related problems, but also in public environments such as smart restaurants, where food recommendations are made based on calorie counting. In such applications, a crucial stage for correct calorie measurement is the accurate segmentation of food regions. In this work, we address semantic segmentation of food images with deep learning. Additionally, we explore food/non-food segmentation by taking advantage of supervised learning. Experimental results show that the followed approach produces appealing results on semantic food segmentation and significantly advances food/non-food segmentation.

Reference Paper IEEE 2019
Semantic Food Segmentation for Automatic Dietary Monitoring
Published in: 2018 IEEE 8th International Conference on Consumer Electronics – Berlin (ICCE-Berlin)
https://ieeexplore.ieee.org/document/8576231

Food calorie measurement using deep learning neural network

Accurate methods to measure food and energy intake are crucial for the battle against obesity. Providing users/patients with convenient and intelligent solutions that help them measure their food intake and collect dietary information are the most valuable insights toward long-term prevention and successful treatment programs. In this paper, we propose an assistive calorie measurement system to help patients and doctors succeed in their fight against diet-related health conditions. Our proposed system runs on smartphones, which allow the user to take a picture of the food and measure the amount of calorie intake automatically. In order to identify the food accurately in the system, we use deep convolutional neural networks to classify 10000 high-resolution food images for system training. 

Reference Paper IEEE 2016
Food calorie measurement using deep learning neural network
Published in: 2016 IEEE International Instrumentation and Measurement Technology Conference Proceedings
https://ieeexplore.ieee.org/document/7520547

Single Image Dehazing Using Dark Channel Fusion and Haze Density Weight

Reference Paper IEEE 2019
Single Image Dehazing Using Dark Channel Fusion and Haze Density Weight
Published in: 2019 IEEE 9th International Conference on Electronics Information and Emergency Communication (ICEIEC)
https://ieeexplore.ieee.org/document/8784493

A Fuzzy Expert System Design for Diagnosis of Skin Diseases

Skin diseases are common in rural communities and flood-affected areas. Preferably, a skin disease should be treated without delay by a dermatologist, but due to the shortage of expertise in rural areas this is so far impossible. An expert system is capable of providing a timely and correct diagnosis, which is why building one is a worthwhile challenge. In this paper we present the design of a fuzzy expert system for the detection of skin (erythemato-squamous) diseases. Because of the uncertainty and imprecision among the symptoms in the diagnosis process, we chose a fuzzy-logic-based design: fuzzy logic makes it possible to deal with the imprecise boundaries of input symptoms in a medical expert system, and consequently the reliability of the system’s results increases. For implementing the system, we use the MATLAB fuzzy logic toolbox. The fuzzy logic controller generates a result from the given symptoms using the Mamdani MIN-MAX inference mechanism and uses the centroid (COG) method for defuzzification. The accuracy of the designed fuzzy expert system was 90.27%.

Reference Paper IEEE 2019
A Fuzzy Expert System Design for Diagnosis of Skin Diseases
Published in: 2019 2nd International Conference on Advancements in Computational Sciences (ICACS)
https://ieeexplore.ieee.org/document/8689140
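A toy sketch of Mamdani-style inference with triangular memberships is shown below. The rules, membership parameters, and the simplified peak-weighted defuzzification are illustrative stand-ins for the full MIN-MAX/centroid machinery in MATLAB's fuzzy logic toolbox, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_centroid(rule_strengths, output_peaks):
    """Aggregate rule firing strengths and defuzzify with a weighted
    centroid over the consequent peaks (a simplified stand-in for the
    full area-based centroid defuzzification)."""
    num = sum(s * p for s, p in zip(rule_strengths, output_peaks))
    den = sum(rule_strengths)
    return num / den if den else 0.0

# Two hypothetical rules: mild itching -> low severity (peak 2),
# strong scaling -> high severity (peak 8).
itching, scaling = 3.0, 7.0
r1 = tri(itching, 0, 2, 5)   # membership of "mild itching"
r2 = tri(scaling, 5, 8, 10)  # membership of "strong scaling"
severity = mamdani_centroid([r1, r2], [2.0, 8.0])
```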

Sharp Curve Lane Detection for Autonomous Driving

Sharp curve lane detection is one of the challenges of visual environment perception technology for autonomous driving. In this paper, a new hyperbola fitting based method of curve lane detection is proposed. The method mainly includes three parts: extraction, clustering, and hyperbola fitting of lane feature points. We compared our method with the Bezier curve fitting based, the least squares curve fitting based, the spline fitting based methods, and an existing hyperbola fitting based method. Experiments show that our method performs better than these methods.

Reference Paper IEEE 2019
Sharp Curve Lane Detection for Autonomous Driving
Published in: Computing in Science & Engineering ( Volume: 21 , Issue: 2 , March-April 1 2019 )
https://ieeexplore.ieee.org/document/8542714

Deep Learning Based Container Text Recognition

Convolutional recurrent neural network (CRNN) and connectionist text proposal network (CTPN) methods cannot extract container text features effectively. This paper proposes a novel Container Text Detection and Recognition Network (CTDRNet) for accurately detecting and recognizing container scene text. The CTDRNet consists of three components: (1) CTDRNet text detection, which improves detection accuracy for single words; (2) CTDRNet text recognition, which has a faster convergence speed and higher accuracy; (3) CTDRNet post-processing, which improves detection and recognition accuracy. In the end, the CTDRNet is implemented and evaluated with an accuracy of 96% and a processing rate of 2.5 fps.

Reference Paper IEEE 2019
Deep Learning Based Container Text Recognition
Published in: 2019 IEEE 23rd International Conference on Computer Supported Cooperative Work in Design (CSCWD)
https://ieeexplore.ieee.org/document/8791876

Hand Gesture Recognition Software Based on Indian Sign Language

Hand gestures are a powerful medium of communication for people with hearing and speech impairments, and they are also useful for connecting people and computers. A potential application of such a system is in public places, where deaf people communicate with ordinary people to exchange messages. In this article, we present a system for continuous gesture recognition for Indian Sign Language (ISL), in which both hands are used to make every gesture. Continuous gesture recognition remains a daunting task. We address it with a key frame extraction method: the key frames help split continuous sign language gestures into sequences of individual signs and remove uninformative frames. After splitting, each sign is treated as a single, isolated gesture. Preprocessed gestures are described using orientation histograms (OH), with PCA applied to reduce the dimensionality of the OH features. The experiments were performed on our live ISL dataset, created with an ordinary camera.

Reference Paper IEEE 2019
Hand Gesture Recognition Software Based on Indian Sign Language
Published in: 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT)
https://ieeexplore.ieee.org/document/8741512

Deep Convolutional Neural Network for Handwritten Numeral Recognition

The recent bloom in machine learning due to deep neural networks, especially Convolutional Neural Networks (CNNs), is showing promising results in this field with better accuracy. Some recent works show very good accuracy in recognizing plain, simple digits but perform poorly in challenging scenarios because of the lack of a large and versatile training dataset. In this work, we propose a CNN model that recognizes numerals with a high degree of accuracy, beyond 96%, even in the most challenging noisy conditions. Initially, 72,000+ specimens from the NumtaDB (85,000+) dataset were used for training and 1,700+ specimens as the test dataset. An improvement in performance in challenging scenarios is observed when the training specimens are augmented to create a training dataset of about 114,000 specimens. The performance of the proposed model is also compared with other existing works and presented here. These findings are based on submissions to the Computer Vision Challenge on Bengali Handwritten Digit Recognition (2018) competition.

Reference Paper IEEE 2019
Deep Convolutional Neural Network for Bangla Handwritten Numeral Recognition
Published in: 2018 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE)
https://ieeexplore.ieee.org/document/8783151

Video Copy Detection Using Spatio-Temporal CNN Features

To protect the copyright of digital videos, video copy detection has become a hot topic in the field of digital copyright protection. Since a video sequence generally contains a large amount of data, to achieve efficient and effective copy detection, the key issue is to extract compact and discriminative video features. To this end, we propose a video copy detection scheme using spatio-temporal convolutional neural network (CNN) features. First, we divide each video sequence into multiple video clips and sample the frames of each video clip. Second, the sampled frames of each video clip are fed into a pre-trained CNN model to generate the corresponding convolutional feature maps (CFMs). Third, based on the generated CFMs, we extract the CNN features on the spatial and temporal domains of each video clip, i.e., the spatio-temporal CNN features. Finally, video copy detection is efficiently and effectively implemented based on the extracted spatio-temporal CNN features.

Reference Paper IEEE 2019
Video Copy Detection Using Spatio-Temporal CNN Features
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8767987

Deep CNN for removal of salt and pepper noise

Image denoising is a common problem in image processing. Salt and pepper noise may contaminate an image by randomly converting some pixel values into 255 or 0. Traditional image denoising algorithms are based on filter design or interpolation. To the authors’ knowledge, no previous work uses a convolutional neural network (CNN) to directly remove salt and pepper noise. In this study, they utilise a multi-layer CNN, containing padding, batch normalisation and rectified linear units, for the removal of salt and pepper noise. For training, the images are divided into three parts: training set, validation set and test set. Experimental results demonstrate that the architecture can effectively remove salt and pepper noise from various noisy images. In addition, the model removes high-density noise well thanks to the extensive local receptive fields of deep neural networks. Finally, extensive experimental results show that the denoiser is effective for images with a large number of interfering pixels that may cause misjudgement. In a word, they generalise the application of CNNs to salt and pepper noise removal and obtain competitive results.

Reference Paper IEEE 2019
Deep CNN for removal of salt and pepper noise
Published in: IET Image Processing ( Volume: 13 , Issue: 9 , 7 18 2019 )
https://ieeexplore.ieee.org/document/8768516
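The noise model described above, together with the classical median-filter baseline such a CNN is usually measured against, can be sketched as follows; the noise density and image values are example parameters.

```python
import numpy as np

def add_salt_pepper(img, density=0.1, seed=0):
    """Randomly flip a fraction of pixels to 0 or 255 (salt and pepper)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

def median_filter3(img):
    """3x3 median filter with edge padding: the classical baseline."""
    padded = np.pad(img, 1, mode='edge')
    stacked = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(3) for dx in range(3)])
    return np.median(stacked, axis=0).astype(img.dtype)

# A single impulse ("salt") pixel is fully removed by the median filter.
clean = np.full((5, 5), 128, dtype=np.uint8)
corrupted = clean.copy()
corrupted[2, 2] = 255
restored = median_filter3(corrupted)
```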

Gait Recognition for Saudi Costume Using Kinect Skeletal Tracking

Gait is a cutting-edge biometric for recognizing people. Gait data can be captured reliably even from long distances, and it is difficult to cover up or copy. In this paper, we have explored the use of Kinect for gait identification of Saudi persons wearing a thobe or abaya. These garments hide most of the joints, so gait recognition becomes a challenge. Our algorithm uses Kinect to identify the top three joints that give the best identification results and then uses them for gait recognition. The features used are the Y coordinates of the joints, and the classifier is K-Nearest Neighbour.

Reference Paper IEEE 2019
Gait Recognition for Saudi Costume Using Kinect Skeletal Tracking
Published in: 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS)
https://ieeexplore.ieee.org/document/8769552
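The classification step can be sketched as a plain K-Nearest-Neighbour vote over joint Y-coordinate feature vectors; the toy gait signatures below are invented for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote; each training item is
    (features, person_label), with features being joint Y-coordinates."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy signatures: Y-coordinates of three tracked joints per sample.
train = [([0.90, 0.50, 0.10], "A"), ([0.92, 0.52, 0.12], "A"),
         ([0.70, 0.40, 0.30], "B"), ([0.72, 0.42, 0.28], "B")]
who = knn_predict(train, [0.91, 0.51, 0.11])  # "A"
```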

Visually Lossless Compression of Dental Images

This project deals with analyzing opportunities to perform this in a non-iterative way for dental medical images, for two versions of a coder based on the discrete cosine transform (DCT): AGU and AGU-M. It is demonstrated that the mean squared error (MSE), and the MSE modified to take into account peculiarities of the human visual system (MSEHVS), of distortions due to lossy compression can be predicted before starting compression itself. Then, a desired quantization step (QS) for AGU or scaling factor (SF) for AGU-M can be adjusted to provide a desired quality. Regression uses statistics of alternating current (AC) DCT coefficients calculated in 300…500 8×8 pixel blocks to predict the output metrics, using curves fitted to preliminarily obtained scatter plots.

Reference Paper IEEE 2019
Visually Lossless Compression of Dental Images
Published in: 2019 IEEE 39th International Conference on Electronics and Nanotechnology (ELNANO)
https://ieeexplore.ieee.org/document/8783218

A Steganography Application for Hiding Student Information into an Image

Information security is a major problem today, and different approaches and methods for data protection are introduced every day. One of them is steganography. The word steganography combines the Greek words steganos, meaning “covered, concealed, or protected,” and graphein, meaning “writing”. The purpose of steganography is to construct a stego object by invisibly placing important information into an ordinary cover object (image, sound, video, text, etc.) and to transmit it to the recipient. In this study, the aim is to strengthen the LSB technique, one of the steganography methods, by suggesting the use of a mask that causes the least change to the image while hiding the data in a digital image. In the proposed method, the data is also compressed with the LZW algorithm, allowing more data to be hidden.

Reference Paper IEEE 2019
A Stenography Application for Hiding Student Information into an Image
Published in: 2019 7th International Symposium on Digital Forensics and Security (ISDFS)
https://ieeexplore.ieee.org/document/8757516
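The plain LSB layer that the study builds on (before adding the mask and LZW compression) can be sketched as follows: each message bit replaces the least significant bit of one pixel, changing any pixel value by at most 1.

```python
def embed_lsb(pixels, message):
    """Write each bit of `message` (MSB first) into the least significant
    bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for cover image")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bytes):
    """Read the message back out of the pixel LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

cover = list(range(64))           # toy 8-bit grayscale pixel stream
stego = embed_lsb(cover, b"Hi")   # hide two bytes in 16 pixels
```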

Integration of Digital Watermarking Technique into Medical Imaging Systems

This paper presents the process of integrating digital watermarking technique into medical imaging workflow to evaluate, validate and verify its applicability and appropriateness to medical domains. This is significant to ensure the ability of the proposed approach to tackle security threats that may face medical images during routine medical practices. This work considers two key objectives within the aim of defining a secure and practical digital medical imaging system: current digital medical workflows are deeply analyzed to define security limitations in Picture Archiving and Communication Systems (PACS) of medical imaging; the proposed watermarking approach is then theoretically tested and validated in its ability to operate in a real-world scenario (e.g. PACS). These have been undertaken through identified case studies related to manipulations of medical images within PACS workflow during acquisition, viewing, exchanging and archiving. This work assures the achievement of the identified particular requirements of digital watermarking when applied to digital medical images and also provides robust controls within medical imaging pipelines to detect modifications that may be applied to medical images during viewing, storing and transmitting.

Reference Paper IEEE 2019
Integration of Digital Watermarking Technique into Medical Imaging Systems
Published in: 2019 10th International Conference on Dependable Systems, Services and Technologies (DESSERT)
https://ieeexplore.ieee.org/document/8770051

Secure Message Embedding in 3D Images

In this paper, a multiple layer message security scheme is proposed, utilizing 3D images. The proposed scheme is robust against any means of eavesdropping or intruding as it is comprised of four layers of security as follows: encryption using AES-128, encoding using a repetition code, least significant bit (LSB) steganography and jamming through the addition of noise. The proposed scheme is compared with its counterparts from the literature and is shown to exhibit excellent traits, in terms of being reversible and its ability to carry out blind extraction of the data as well as withstanding geometrical attacks. Furthermore, the proposed scheme exhibits very good performance in terms of the mean squared error (MSE) and the peak signal to noise ratios (PSNR).

Reference Paper IEEE 2019
Secure Message Embedding in 3D Images
Published in: 2019 International Conference on Innovative Trends in Computer Engineering (ITCE)
https://ieeexplore.ieee.org/document/8646685
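The repetition-code layer of the scheme above can be sketched as follows: each bit is repeated n times on encoding, and a majority vote on decoding corrects isolated bit flips introduced by the added jamming noise.

```python
def rep_encode(bits, n=3):
    """Repeat each bit n times (the encoding layer of the scheme)."""
    return [b for bit in bits for b in [bit] * n]

def rep_decode(coded, n=3):
    """Majority vote within each group of n, correcting up to
    (n - 1) // 2 flipped bits per group."""
    out = []
    for i in range(0, len(coded), n):
        group = coded[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

msg = [1, 0, 1]
coded = rep_encode(msg)        # [1, 1, 1, 0, 0, 0, 1, 1, 1]
coded[1] ^= 1                  # one flipped bit per group is survivable
recovered = rep_decode(coded)  # [1, 0, 1]
```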

Hiding Images Within Images

Deep neural networks are simultaneously trained to create the hiding and revealing processes and are designed specifically to work as a pair. The system is trained on images drawn randomly from the ImageNet database and works well on natural images from a wide variety of sources. Beyond demonstrating the successful application of deep learning to hiding images, we examine how the result is achieved and apply numerous transformations to analyze whether image quality in the host and hidden image can be maintained. These transformations range from simple image manipulations to sophisticated machine-learning-based adversaries. Two extensions to the basic system are presented that mitigate the possibility of discovering the content of the hidden image. With these extensions, not only can the hidden information be kept secure, but the system can be used to hide more than a single image. Applications for this technology include image authentication, digital watermarks, finding exact regions of image manipulation, and storing meta-information about image rendering and content.

Reference Paper IEEE 2019
Hiding Images Within Images
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence ( Early Access )
https://ieeexplore.ieee.org/document/8654686

An Efficient Hand Gesture Recognition System Based on Deep CNN

The goal of this paper is to use a webcam to instantly track the region of interest (ROI), namely the hand region, in the image range and identify hand gestures for home appliance control (in order to create smart homes) or human-computer interaction fields. First, we use skin color detection and morphology to remove unnecessary background information from the image, and then use background subtraction to detect the ROI. Next, to avoid background objects or noise affecting the ROI, we use the kernelized correlation filters (KCF) algorithm to track the detected ROI. The ROI is then resized to 100×120 and fed into a deep convolutional neural network (CNN) to identify multiple hand gestures. Two deep CNN architectures are developed in this study, modified from AlexNet and VGGNet, respectively. The above process of tracking and recognition is repeated to achieve an instant effect, and the system continues executing until the hand leaves the camera range. Finally, the training data set reaches a recognition rate of 99.90% and the test data set a recognition rate of 95.61%, which demonstrates the feasibility of the practical application.
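
The skin-detection-plus-morphology front end can be sketched with numpy alone. The fixed Cb/Cr thresholds below are a common rule of thumb from the skin-detection literature, not necessarily the paper's exact values, and the 3×3 erosion stands in for the full morphology step:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean skin mask via a fixed Cb/Cr range on an RGB image
    (thresholds are illustrative, not the paper's calibrated values)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def erode(mask):
    """3x3 binary erosion to remove speckle: an interior pixel survives
    only if it and its 4-neighbours are all foreground."""
    out = mask.copy()
    out[1:-1, 1:-1] = (mask[:-2, 1:-1] & mask[2:, 1:-1] &
                       mask[1:-1, :-2] & mask[1:-1, 2:] & mask[1:-1, 1:-1])
    return out
```

The resulting mask would then feed the background-subtraction and KCF tracking stages described above.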

Reference Paper IEEE 2019
An Efficient Hand Gesture Recognition System Based on Deep CNN
Published in: 2019 IEEE International Conference on Industrial Technology (ICIT)
https://ieeexplore.ieee.org/document/8755038

Bacteria Classification using Image Processing and Deep learning

Automating bacteria recognition is attractive because it reduces analysis time and increases the accuracy of the diagnostic process. This research studies the possibility of using image classification and deep learning to classify genera of bacteria. We describe an implementation of a bacteria recognition system using Python programming and the Keras API with the TensorFlow machine learning framework. The implementation results confirm that the genus of a bacterium can be recognized from microscope images. The experiments compare deep learning accuracy for bacteria recognition on standard-resolution images, and the proposed method can be applied to both high-resolution and standard-resolution datasets for predicting bacteria type. However, this first study is limited to only two genera of bacteria.

Reference Paper IEEE 2019
Bacteria Classification using Image Processing and Deep learning
Published in: 2019 34th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC)
https://ieeexplore.ieee.org/document/8793320

Fingerprint Recognition System using MATLAB

In the modern world, where people rely on so many advanced technologies, security is key to every aspect of life, and most security systems are now computerized. Fingerprints are distinctive biometrics: each person's fingerprint carries a unique identity. A human fingerprint is rich in details called minutiae, which can be used as identification marks for fingerprint verification. The goal of this project is to develop a complete system for fingerprint verification by extracting and matching minutiae. To achieve good minutiae extraction from fingerprints of varying quality, preprocessing is applied before the fingerprints are evaluated. After preprocessing, minutiae extraction is performed, followed by a postprocessing stage and finally minutiae matching. After all of these stages, we obtain the final matching output: whether the fingerprints match or not.
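
The classic way to extract minutiae from a thinned (one-pixel-wide) ridge skeleton is the crossing-number method, which is very likely what a MATLAB pipeline like this uses after preprocessing. A minimal numpy version (assuming the skeleton is already computed, which is the hard part the paper's preprocessing handles):

```python
import numpy as np

def minutiae(skel):
    """Classify skeleton pixels by crossing number CN = half the number
    of 0/1 transitions around the 8-neighbourhood:
    CN == 1 -> ridge ending, CN == 3 -> bifurcation."""
    endings, bifurcations = [], []
    H, W = skel.shape
    # 8-neighbourhood offsets in clockwise order; the cycle is closed below
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if not skel[y, x]:
                continue
            p = [int(skel[y + dy, x + dx]) for dy, dx in ring]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((y, x))
            elif cn == 3:
                bifurcations.append((y, x))
    return endings, bifurcations
```

Matching would then compare the two minutiae point sets (e.g. after alignment), which is the "coordinating details" stage of the project.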

Reference Paper IEEE 2019
Fingerprint Recognition System using MATLAB
Published in: 2019 International Conference on Automation, Computational and Technology Management (ICACTM)
https://ieeexplore.ieee.org/document/8776680

Enhanced embedded zerotree wavelet algorithm for lossy image coding

The embedded zerotree wavelet (EZW) algorithm is a well-known, effective coding technique for low-bit-rate image compression. In this study, the authors propose a modification of this algorithm, the new enhanced EZW (NE-EZW), which achieves high compression performance in terms of peak signal-to-noise ratio and bitrate for lossy image compression. To distribute probabilities more efficiently, the proposed approach increases the number of coefficients that need not be encoded through the use of new symbols. Furthermore, the proposed method optimises the binary coding by using the compressor cell operator. Experimental results demonstrate the effectiveness of the proposed scheme over the conventional EZW and other improved EZW schemes for both natural and medical image coding applications.
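
For readers new to EZW, the heart of the baseline algorithm is the dominant pass, which labels each wavelet coefficient against a threshold T. The sketch below is a simplification: it treats the whole array as one quadtree with children (2i, 2j)…(2i+1, 2j+1), ignoring the proper subband layout and scan order of real EZW, and it does not implement any of the paper's NE-EZW extensions:

```python
def descendants(i, j, n):
    """All quadtree descendants of coefficient (i, j) in an n x n array,
    where the children of (i, j) are (2i, 2j) .. (2i+1, 2j+1)."""
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    out = []
    for (y, x) in kids:
        if y < n and x < n and (y, x) != (i, j):  # (0,0) maps to itself
            out.append((y, x))
            out += descendants(y, x, n)
    return out

def dominant_pass(coeffs, T):
    """One simplified EZW dominant pass: POS/NEG for significant
    coefficients, ZTR (zerotree root) if the coefficient and every
    descendant are insignificant, IZ (isolated zero) otherwise."""
    n = len(coeffs)
    symbols = {}
    for i in range(n):
        for j in range(n):
            c = coeffs[i][j]
            if abs(c) >= T:
                symbols[(i, j)] = 'POS' if c > 0 else 'NEG'
            elif all(abs(coeffs[y][x]) < T for (y, x) in descendants(i, j, n)):
                symbols[(i, j)] = 'ZTR'
            else:
                symbols[(i, j)] = 'IZ'
    return symbols
```

The ZTR symbol is what makes EZW efficient: one symbol silences an entire insignificant subtree, and NE-EZW's new symbols push this idea further.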

Reference Paper IEEE 2019
Enhanced embedded zerotree wavelet algorithm for lossy image coding
Published in: IET Image Processing ( Volume: 13 , Issue: 8 , 6 20 2019 )
https://ieeexplore.ieee.org/document/8741344

A Content-based Image Retrieval Scheme using Bag-of-Encrypted-Words in Cloud Computing

Content-based Image Retrieval (CBIR) techniques have been extensively studied with the rapid growth of digital images. Generally, a CBIR service is quite expensive in computational and storage resources, so it is a good choice to outsource it to a cloud server equipped with enormous resources. However, privacy protection becomes a serious problem, as the cloud server cannot be fully trusted. In this paper, we propose an outsourced CBIR scheme based on a novel bag-of-encrypted-words (BOEW) model. The image is encrypted by color value substitution, block permutation, and intra-block pixel permutation. Then, the local histograms are calculated from the encrypted image blocks by the cloud server. All the local histograms are clustered together, and the cluster centers are used as the encrypted visual words. In this way, the bag-of-encrypted-words (BOEW) model is built to represent each image by a feature vector, i.e., a normalized histogram of the encrypted visual words. The similarity between images can then be measured directly by the Manhattan distance between feature vectors on the cloud server side.
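
The feature-building and similarity steps described above reduce to a few numpy operations. This sketch assumes the local histograms and cluster centres ("encrypted visual words") have already been computed, and omits the encryption stages entirely:

```python
import numpy as np

def boew_feature(block_histograms, codebook):
    """Map each local histogram to its nearest visual word (cluster
    centre, nearest in L1) and return the normalised word histogram."""
    words = [int(np.argmin(np.abs(codebook - h).sum(axis=1)))
             for h in block_histograms]
    feat = np.bincount(words, minlength=len(codebook)).astype(float)
    return feat / feat.sum()

def manhattan(a, b):
    """Image similarity = L1 (Manhattan) distance between features."""
    return float(np.abs(a - b).sum())
```

Because only histograms of permuted, substituted data ever reach the cloud, the server can rank results by `manhattan` distance without seeing plaintext pixels.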

Reference Paper IEEE 2019
A Content-based Image Retrieval Scheme using Bag-of-Encrypted-Words in Cloud Computing
Published in: IEEE Transactions on Services Computing ( Early Access )
https://ieeexplore.ieee.org/document/8758854

Fish Tracking and Counting using Image Processing

This paper presents a simple method of tracking and counting fish images using an image processing technique. An experiment to capture images of fish population was conducted and fish images were processed using blob analysis and Euclidean filtering. The focus of this paper is to present the image processing technique and test the detection and counting accuracy. The results of the experiment show that the proposed method obtained a high level of detection and accuracy. Finally, recommendations for future improvements are provided.
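
The counting step of a blob-analysis pipeline like this amounts to connected-component labelling with an area filter. A plain-Python sketch (iterative flood fill, 4-connectivity; the `min_size` filter stands in for the Euclidean filtering used to reject noise blobs):

```python
def count_blobs(mask, min_size=1):
    """Count 4-connected foreground blobs in a binary mask, ignoring
    blobs smaller than `min_size` pixels (a simple area filter)."""
    H, W = len(mask), len(mask[0])
    seen = [[False] * W for _ in range(H)]
    count = 0
    for sy in range(H):
        for sx in range(W):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            stack, size = [(sy, sx)], 0
            seen[sy][sx] = True
            while stack:                      # iterative flood fill
                y, x = stack.pop()
                size += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if size >= min_size:
                count += 1
    return count
```

On a thresholded frame of a fish tank, each surviving blob would be one fish candidate; tracking then associates blobs across frames.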

Reference Paper IEEE 2019
Fish Tracking and Counting using Image Processing
Published in: 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology,Communication and Control, Environment and Management (HNICEM)
https://ieeexplore.ieee.org/document/8666369

Image Processing Mobile Application For Banana Ripeness Evaluation

Mobile applications have been identified as the best platform for an expert system tool to reach as many users as possible. The main contribution of this paper is the development of an expert system tool for evaluating the ripeness of banana fruit. Utilizing the Google Cloud Platform, the application sends a sample banana image through the Google Cloud Vision Application Programming Interface to get attribute readings from the sample image. The result of the analysis is compared with the application's database of attribute datasets to determine the ripeness of the banana sample image. In this work, the ripeness of the banana is classified into three classes of maturity (unripe, ripe, and overripe) based on their key attribute values. This work also involved collecting samples of bananas at different levels of ripeness, application development, and evaluation to improve the accuracy of the developed application's classification results using image processing and data mining techniques.

Reference Paper IEEE 2019
Image Processing Mobile Application For Banana Ripeness Evaluation
Published in: 2018 International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA)
https://ieeexplore.ieee.org/document/8477600

Transfer Learning with Efficient Convolutional Neural Networks for Fruit Recognition

An efficient and effective image-based fruit recognition network is critical for supporting mobile applications in practice. This paper presents a method to recognize fruit faster and more accurately by using the transfer learning technique. The proposed network performs depthwise separable convolution with a thinner (width-multiplier) factor to reduce the size of the vanilla network, and improves performance by adopting global depthwise convolution. Additionally, we provide a simple analysis of how these methods reduce the parameters and the computational cost of training. To test the accuracy and enhance the robustness of the model, we use the Fruits-360 dataset, which contains 55,244 images spread across 81 classes. The experimental results demonstrate that our proposed network is superior to three previous state-of-the-art networks. Moreover, our model has higher accuracy than the vanilla model with the same thinner factor.
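
The parameter saving behind depthwise separable convolution and the thinner factor is easy to verify by counting. The sketch below counts parameters only (biases ignored); the specific channel numbers in the usage are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def sep_conv_params(c_in, c_out, k, alpha=1.0):
    """Depthwise k x k conv plus pointwise 1 x 1 conv, with a width
    ('thinner') multiplier alpha shrinking the channel counts."""
    c_in, c_out = int(alpha * c_in), int(alpha * c_out)
    return k * k * c_in + c_in * c_out
```

For example, replacing a 3×3 convolution from 32 to 64 channels cuts parameters from 18,432 to 2,336, roughly the 1/C_out + 1/k² reduction reported for MobileNet-style networks, and a thinner factor of 0.5 shrinks it further still.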

Reference Paper IEEE 2019
Transfer Learning with Efficient Convolutional Neural Networks for Fruit Recognition
Published in: 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC)
https://ieeexplore.ieee.org/document/8729435

A Framework to Estimate the Nutritional Value of Food in Real Time Using Deep Learning Techniques

There has been a rapid increase in dietary ailments during the last few decades, caused by unhealthy eating habits. Mobile-based dietary assessment systems that can record real-time images of a meal and analyze its nutritional content can be very handy, improve dietary habits, and therefore lead to a healthier life. This paper proposes a novel system to automatically estimate food attributes such as ingredients and nutritional value by classifying the input image of food. Our method employs different deep learning models for accurate food identification. In addition to image analysis, attributes and ingredients are estimated by extracting semantically related words from a huge corpus of text collected over the Internet. We performed experiments with a dataset comprising 100 classes, averaging 1000 images per class, to achieve a top-1 classification rate of up to 85%. An extension of the benchmark dataset Food-101 is also created to include sub-continental foods. Results show that our proposed system is equally efficient on the basic Food-101 dataset and its extension for sub-continental foods. The proposed system is implemented as a mobile app that has application in the healthcare sector.

Reference Paper IEEE 2019
A Framework to Estimate the Nutritional Value of Food in Real Time Using Deep Learning Techniques
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8590712

Selection-based subpixel-shifted images super-resolution

Subpixel-shifted (SPS) image acquisition methods based on imaging systems have the limitations of complex structure, difficult production, and high cost. Therefore, this paper proposes an image super-resolution reconstruction method based on registration. In the first phase, the registration algorithm is used to select the SPS images; to improve its accuracy, a registration algorithm combining SIFT-FLANN and misregistration point elimination (SFME) is proposed. In the second phase, an interpolation of nonuniformly spaced samples based on pixel gray correction is proposed to obtain the high-resolution (HR) image. Experiments show that the image selection method can obtain higher-precision SPS images, and the reconstruction method can reconstruct HR images with better visual quality and higher spatial resolution.

Reference Paper IEEE 2019
Selection-based subpixel-shifted images super-resolution
Published in: IEEE Access ( Early Access )
https://ieeexplore.ieee.org/document/8794494

Image Deblocking Detection Based on a Convolutional Neural Network

Motion JPEG (MJPEG) is one of the most popular video formats, in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image. By splitting the MJPEG video into JPEG image frames, a tamperer might employ powerful multimedia deblocking methods to cover up the video tampering traces. To the best of our knowledge, there is no existing method for the forensics of deblocking. In this paper, we propose a novel method to detect deblocking, which can automatically learn feature representations based on a deep learning framework. We first train a supervised convolutional neural network (CNN) to learn the hierarchical features of deblocking operations with labeled patches from the training datasets. The first convolutional layer of the CNN serves as the preprocessing module to efficiently obtain the tampering artifacts. Then, we extract the features for an image with the CNN on a patch basis by applying a patch-sized sliding window to scan the whole image. The generated image representation is then condensed by a simple feature fusion technique, i.e., regional pooling, to obtain the final discriminative feature. The experimental results on several public datasets demonstrate the superiority of the proposed scheme.
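
The sliding-window scan plus regional pooling is a generic pattern worth seeing in code. In this sketch, `feature_fn` stands in for the trained CNN (here just a toy statistic), and mean pooling stands in for whichever regional pooling variant the paper uses:

```python
import numpy as np

def patch_scan(image, patch, stride, feature_fn):
    """Slide a patch-sized window over a 2D image and collect one
    feature vector per patch position."""
    feats = []
    H, W = image.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            feats.append(feature_fn(image[y:y + patch, x:x + patch]))
    return np.array(feats)

def regional_pool(feats):
    """Condense per-patch features into one image-level descriptor by
    average pooling over all patches."""
    return feats.mean(axis=0)
```

A classifier trained on the pooled descriptor then decides whether the frame has been deblocked.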

Reference Paper IEEE 2019
Image Deblocking Detection Based on a Convolutional Neural Network
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8649625

Real-Time Deep Learning Method for Abandoned Luggage Detection in Video

In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach comprises two stages: (i) static object detection based on background subtraction and motion estimation, and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks, we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method.
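
Stage (i), static object detection via background subtraction, can be sketched with a running-average background model and a per-pixel persistence counter. The thresholds and the exact persistence rule are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of the scene background."""
    return (1 - alpha) * bg + alpha * frame

def static_foreground(bg, frames, diff_thresh=25, min_static=3):
    """Pixels that differ from the background in at least `min_static`
    consecutive frames: candidates for static (abandoned) objects."""
    count = np.zeros(bg.shape)
    for f in frames:
        fg = np.abs(f - bg) > diff_thresh
        count = np.where(fg, count + 1, 0)  # reset where pixel rejoined bg
    return count >= min_static
```

The resulting static-pixel regions would then be cropped and passed to the CNN cascade of stage (ii) to decide whether they actually depict luggage.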

Reference Paper IEEE 2019
Real-Time Deep Learning Method for Abandoned Luggage Detection in Video
Published in: 2018 26th European Signal Processing Conference (EUSIPCO)
https://ieeexplore.ieee.org/document/8553156

Pedestrian Detection Based on YOLO Network Model

This paper improves the network structure of the YOLO algorithm and proposes a new network structure, YOLO-R. First, three Passthrough layers were added to the original YOLO network. The Passthrough layer consists of the Route layer and the Reorg layer; its role is to connect shallow-layer pedestrian features to deep-layer pedestrian features and link high- and low-resolution pedestrian features. The role of the Route layer is to pass the pedestrian feature information of a specified layer to the current layer; the Reorg layer then reorganizes the feature map so that the introduced Route-layer features can be matched with the feature map of the next layer. The three added Passthrough layers effectively transfer the network's shallow, fine-grained pedestrian features to the deep network, enabling the network to better learn shallow pedestrian feature information. This paper also changes the layer connected by the Passthrough layer in the original YOLO algorithm from Layer 16 to Layer 12 to increase the network's ability to extract shallow pedestrian features. The improvement was tested on the INRIA pedestrian dataset. The experimental results show that this method effectively improves pedestrian detection accuracy while reducing the false detection rate and the missed detection rate, and the detection speed reaches 25 frames per second.
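
The Reorg layer described above is a space-to-depth rearrangement: it trades spatial resolution for channels so a high-resolution shallow map can be concatenated with a deeper, lower-resolution one. A numpy sketch of the standard reorg operation (channel ordering conventions vary between frameworks; this is one common choice):

```python
import numpy as np

def reorg(x, s=2):
    """Space-to-depth reorg: (C, H, W) -> (C*s*s, H/s, W/s).
    Each s x s spatial block becomes s*s separate channels, so the
    output can be concatenated channel-wise with a deeper feature map."""
    c, h, w = x.shape
    x = x.reshape(c, h // s, s, w // s, s)
    # reorder to (C, s, s, H/s, W/s), then fold the two s axes into channels
    return x.transpose(0, 2, 4, 1, 3).reshape(c * s * s, h // s, w // s)
```

After `reorg`, a 26×26 shallow map in YOLO-style networks matches the 13×13 grid of the deep map, which is exactly what lets the Passthrough layer fuse the two.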

Reference Paper IEEE 2019
Pedestrian Detection Based on YOLO Network Model
Published in: 2018 IEEE International Conference on Mechatronics and Automation (ICMA)
https://ieeexplore.ieee.org/document/8484698

Facial Recognition using Convolutional Neural Networks and Implementation on Smart Glasses

Facial recognition provides biometric authentication, which is used in different applications, especially security. A stored database of subjects is manipulated using image processing techniques to accomplish this task. This paper proposes a framework of smart glasses that can recognize faces. Implementing facial recognition on portable smart glasses can aid law enforcement agencies in detecting a suspect's face; the advantage over security cameras is their portability and good frontal-view capture. The techniques used for the whole face recognition process are machine learning-based because of their high accuracy compared with other techniques. Face detection is the pre-step for face recognition and is performed using Haar-like features; the detection rate of this method is 98% using 3099 features. Face recognition is achieved using a sub-field of deep learning, the convolutional neural network (CNN), a multi-layer network trained to perform a specific task using classification. Transfer learning of a trained CNN model, AlexNet, is applied for face recognition, achieving an accuracy of 98.5% with 2500 variant images per class. These smart glasses can serve the security domain in the authentication process.
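
Haar-like features owe their speed to the integral image (summed-area table), which makes any rectangle sum four lookups. A minimal numpy sketch of one two-rectangle feature, the kind the Viola-Jones-style detector evaluates thousands of times per window:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row/column, so any box sum
    is exactly four table lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of the h x w box with top-left corner (y, x)."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle (left minus right) Haar-like feature over a
    2w-wide window; large magnitudes indicate a vertical edge."""
    return box_sum(ii, y, x, h, w) - box_sum(ii, y, x + w, h, w)
```

A cascade of weak classifiers thresholding thousands of such features yields the fast face detector; recognition of the detected face is then handed to the CNN.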

Reference Paper IEEE 2019
Facial Recognition using Convolutional Neural Networks and Implementation on Smart Glasses
Published in: 2019 International Conference on Information Science and Communication Technology (ICISCT)
https://ieeexplore.ieee.org/document/8777442

Hand Gesture Recognition and Voice Conversion System for Dumb People

Sign language plays a major role in allowing mute people to communicate with others, but it is very difficult for mute people to convey their message to people who are not trained in sign language, and conveying a message in an emergency is especially difficult. The solution to this problem is to convert the sign language into audible speech. There are two major techniques for detecting hand motion or gestures, vision-based and non-vision-based, and the detected information is converted into voice through a Raspberry Pi. In the vision-based technique, a camera is used for gesture detection; in the non-vision-based technique, sensors are used. This project uses the non-vision-based technique. Since many mute people are also deaf, a hearing person's voice can in turn be converted into sign language. In an emergency situation, a message is automatically sent to their relatives or friends.

Reference Paper IEEE 2019
Hand Gesture Recognition and Voice Conversion System for Dumb People
Published in: 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS)
https://ieeexplore.ieee.org/document/8728538

Hand gesture recognition enhancement based on spatial fuzzy matching in Leap Motion

Gesture recognition is an important human-computer interaction interface. This paper introduces a novel hand gesture recognition system based on the Leap Motion gen. 2. In this system, a spatial fuzzy matching (SFM) algorithm is first presented, which matches and fuses spatial information to construct a fused gesture dataset. For dynamic hand recognition, an initial-frame correction strategy based on SFM is proposed to quickly initialize the trajectory of a test gesture with respect to the gesture dataset. A notable feature of this system is that it can run on ordinary laptops due to the small size of the fused dataset, which accelerates the calculation of the recognition rate. Experimental results show that the system recognizes static hand gestures at rates of 94%–100% and dynamic gestures at over 90% on our collected dataset. This can greatly enhance the usability of the Leap Motion.

Reference Paper IEEE 2019
Hand gesture recognition enhancement based on spatial fuzzy matching in Leap Motion
Published in: IEEE Transactions on Industrial Informatics ( Early Access )
https://ieeexplore.ieee.org/document/8772096

Kinect-Based Platform for Movement Monitoring and Fall-Detection of Elderly People

The presented article details our platform for movement monitoring and fall-detection of persons based on data acquired from a Microsoft Kinect v2 sensor. The proposed platform is programmed in the C# programming language for more efficient real-time analysis of the obtained spatial data and future modularity – allowing the integration of other data sources (e.g., thermal sensors, accelerometer data or electrocardiogram recordings) to create a sophisticated monitoring platform. The primary intended use of the platform is to monitor elderly people living alone and, in case of fall detection, transmit relevant information to relatives or medical staff and/or perform specific actions (e.g., turn off kitchen appliances).
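
One simple way such a platform can flag a fall from Kinect skeleton data is to watch the tracked head height over a short time window. The sketch below is purely illustrative: the drop threshold, window length, and the head-height-only rule are assumptions, not the platform's actual (C#) detection logic:

```python
def detect_fall(head_heights, fps=30, drop=0.8, window_s=0.7):
    """Flag a fall when the tracked head height (metres above floor)
    drops by more than `drop` within `window_s` seconds.
    Thresholds are illustrative, not calibrated values."""
    win = max(1, int(window_s * fps))
    for t in range(win, len(head_heights)):
        if head_heights[t - win] - head_heights[t] > drop:
            return True
    return False
```

A production system would combine several joints, velocity, and a lying-posture check to avoid false alarms from sitting or bending down.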

Reference Paper IEEE 2019
Kinect-Based Platform for Movement Monitoring and Fall-Detection of Elderly People
Published in: 2019 12th International Conference on Measurement
https://ieeexplore.ieee.org/document/8780004

Smart Home With Virtual Assistant Using Raspberry Pi

“Olivia” is a virtual assistant developed specifically for homes, which can be integrated into any home to make it a smart home. The user can interact solely through his or her voice with Olivia to get work done around the house. Olivia can be installed anywhere inside a house, as it lives inside a Raspberry Pi, a compact and inexpensive computer that connects easily to devices such as microphones, speakers, cameras, and PIR sensors, giving it the ability to turn any home into a smart home. In this system, we have integrated Olivia into a smart door lock for “The Smart Home Surveillance System” implemented on the Raspberry Pi. The system is smart enough to identify and differentiate between the owner and a stranger using face recognition and act accordingly: Olivia can interact with a stranger at the door when the owner is not at home and will notify the owner about the visit via email and SMS, along with an image of the stranger. Similarly, Olivia can be integrated with other systems and appliances, such as lights and air conditioners, making them smarter. The virtual assistant is highly beneficial for visually impaired people, as it can perform various functions inside the house, such as reporting the weather and stock prices, performing calculations, telling jokes, or playing songs, all solely through voice.

Reference Paper IEEE 2019
Smart Home With Virtual Assistant Using Raspberry Pi
Published in: 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
https://ieeexplore.ieee.org/document/8776918

Scene to Text Conversion and Pronunciation for Visually Impaired People

The recent technological advancements are focusing on developing smart systems to improve the quality of life. Machine learning algorithms and artificial intelligence are becoming elementary tools used in the establishment of modern smart systems across the globe. In this context, an effective approach is suggested for automated text detection and recognition in natural scenes. The incoming image is first enhanced by employing Contrast Limited Adaptive Histogram Equalization (CLAHE). Afterward, the text regions of the enhanced image are detected by employing the Maximally Stable Extremal Regions (MSER) feature detector. The non-text MSERs are removed by employing appropriate filters, and the remaining MSERs are grouped into words. Text recognition is performed by employing an Optical Character Recognition (OCR) function, and the extracted text is pronounced using a suitable speech synthesizer. The proposed system prototype is realized, and its functionality is verified with the help of an experimental setup. Results prove the concept and working principle of the devised system and show the potential of employing the suggested method in the development of modern devices for visually impaired people.
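
The "contrast limited" part of CLAHE is the key difference from plain histogram equalisation: histogram counts above a clip limit are trimmed and redistributed before building the mapping, which prevents noise amplification in flat regions. The sketch below applies the idea globally for brevity; real CLAHE does this per tile and bilinearly interpolates the per-tile mappings:

```python
import numpy as np

def clip_hist_eq(img, clip_frac=0.01):
    """Histogram equalisation with clipping: counts above the clip limit
    are trimmed and redistributed uniformly over all bins (the core
    CLAHE idea, done globally here rather than per tile)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    limit = max(clip_frac * img.size, 1.0)
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0  # redistribute excess
    cdf = hist.cumsum()
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

The enhanced image then goes to MSER, whose stability criterion benefits from the better-separated intensity levels.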

Reference Paper IEEE 2019
Scene to Text Conversion and Pronunciation for Visually Impaired People
Published in: 2019 Advances in Science and Engineering Technology International Conferences (ASET)
https://ieeexplore.ieee.org/document/8714269

Automatic Diagnosis of Thyroid Ultrasound Image Based on FCN-AlexNet and Transfer Learning

An automatic method applied to thyroid ultrasound images for lesion localization and diagnosis of benign and malignant lesions is proposed in this paper. The FCN-AlexNet deep learning method is used to segment the images, achieving accurate localization of thyroid nodules. Then, transfer learning is introduced to solve the problem of training data shortages during the training process. Given AlexNet's performance in classification, it is used to diagnose benign and malignant lesions. The localization performance of the TBD, RGI, PAORGB, and ASPS methods is comparatively evaluated by the IoU indicator, and the accuracy of benign and malignant diagnosis is evaluated by accuracy, sensitivity, specificity, and AUC. The experimental results show that the proposed method has better performance in localization and in the diagnosis of benign and malignant lesions.

Published in: 2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)

Optimization and Hardware Implementation of Image and Video Watermarking for Low-Cost Applications

The prevalence of wireless networks has made the long-term need for communications security more imperative. In various wireless applications, images and/or video constitute critical data for transmission. For their copyright protection and authentication, watermarking can be used. In many cases, the cost of wireless nodes must be kept low, which means that their processing and/or power capabilities are very limited. In such cases, low-cost hardware implementations of digital image/video watermarking techniques are necessary. However, to end up with such implementations, proper selection of watermarking techniques is not enough. For this reason, in this paper, we introduce computation optimizations of the implemented algorithm to keep the integer part of arithmetic operations at optimal size, and, hence, arithmetic units as small as possible. In addition, further analysis is performed to reduce quantization error. Three different hardware-architecture variants, two for image watermarking and one for video (pipelined), are proposed, which reutilize the already small arithmetic units in different computation steps, to further reduce implementation cost. The proposed designs compare favorably to already existing implementations in terms of area, power, and performance. Moreover, the watermarked images’/frames’ errors, compared to their floating point counterparts, are very small, while robustness to various attacks is high.
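
The word-length optimisation described above comes down to bounding the error introduced by a chosen fixed-point format. A small sketch of that analysis (the format and values are illustrative; the paper's actual optimisation works on the watermarking algorithm's internal arithmetic):

```python
def to_fixed(x, frac_bits):
    """Quantise x to signed fixed point with `frac_bits` fractional
    bits (round to nearest representable value)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def max_quant_error(values, frac_bits):
    """Worst-case error of the chosen format over a set of coefficients;
    useful for sizing arithmetic units against an error budget."""
    return max(abs(v - to_fixed(v, frac_bits)) for v in values)
```

Since round-to-nearest error is at most half an LSB (1 / 2^(frac_bits+1)), a designer can pick the smallest `frac_bits` whose worst case stays within the watermark's imperceptibility budget, which directly shrinks the hardware multipliers and adders.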

Reference Paper IEEE 2019
Optimization and Hardware Implementation of Image and Video Watermarking for Low-Cost Applications
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers ( Volume: 66 , Issue: 6 , June 2019 )
https://ieeexplore.ieee.org/document/8694927

A Strawberry Detection System Using Convolutional Neural Networks

In recent years, robotic technologies, e.g. drones or autonomous cars have been applied to the agricultural sectors to improve the efficiency of typical agricultural operations. Some agricultural tasks that are ideal for robotic automation are yield estimation and robotic harvesting. For these applications, an accurate and reliable image-based detection system is critically important. In this work, we present a low-cost strawberry detection system based on convolutional neural networks. Ablation studies are presented to validate the choice of hyper-parameters, framework, and network structure. Additional modifications to both the training data and network structure that improve precision and execution speed, e.g., input compression, image tiling, color masking, and network compression, are discussed. Finally, we present a final network implementation on a Raspberry Pi 3B that demonstrates a detection speed of 1.63 frames per second and an average precision of 0.842.

Reference Paper IEEE 2019
A Strawberry Detection System Using Convolutional Neural Networks
Published in: 2018 IEEE International Conference on Big Data (Big Data)
https://ieeexplore.ieee.org/document/8622466

Crops Disease Diagnosing Using Image-Based Deep Learning Mechanism

To increase crop productivity, environmental factors and production resources such as temperature, humidity, labor, and electrical costs are important. Above all, however, crop disease is the crucial factor, causing a 20-30% reduction in productivity when infection occurs; the disease of the crop is thus the most important factor affecting productivity. Farmers therefore watch for the causes of disease in their crops during growth, but it is not easy to recognize disease on the spot. Until now, they have relied on the opinions of experts or their own experience when disease is suspected, and failing to take appropriate and timely action triggers a decrease in productivity. In this paper, to address this problem, we provide a mechanism that dynamically analyzes images of the disease. The analysis result is immediately sent to the farmer, who makes the decision, and the farmer's feedback is reflected back into the model. The mechanism diagnoses disease, especially in strawberry fruits and leaves, using deep learning on a dataset of images. It thus encourages increased productivity through fast recognition of disease and the consequent action.

Reference Paper IEEE 2019
Crops Disease Diagnosing Using Image-Based Deep Learning Mechanism
Published in: 2018 International Conference on Computing and Network Communications (CoCoNet)
https://ieeexplore.ieee.org/document/8476914

Fused Convolutional Neural Network for White Blood Cell Image Classification

Blood cell image classification is an important part of a medical diagnosis system. In this paper, we propose a fused convolutional neural network (CNN) model to classify images of white blood cells (WBC). We use five convolutional layers, three max-pooling layers, and a fully connected network with a single hidden layer. We fuse the feature maps of two convolutional layers using max-pooling to provide the input to the fully connected layer. We compare our model's accuracy and computational time with a combined CNN-recurrent neural network (RNN) model, and show that our model trains faster than the CNN-RNN model.
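
The fusion step can be sketched in numpy. This is one plausible reading of the description (pool both maps, then combine elementwise with max); the paper's exact wiring of the two convolutional layers may differ:

```python
import numpy as np

def fuse_max_pool(fmap_a, fmap_b, k=2):
    """Max-pool two same-shaped (C, H, W) feature maps with a k x k
    window, then fuse them elementwise with max before flattening for
    the fully connected layer."""
    def pool(x):
        c, h, w = x.shape
        # fold each k x k spatial block into its own axes, then max over them
        return x.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))
    return np.maximum(pool(fmap_a), pool(fmap_b))
```

The fused map keeps the strongest response from either layer at each location, which is the intuition behind max-based feature fusion.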

Reference Paper IEEE 2019
Fused Convolutional Neural Network for White Blood Cell Image Classification
Published in: 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)
https://ieeexplore.ieee.org/document/8669049
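
The fusion step described above, pooling one feature map down to another's spatial size before combining them for the fully connected layer, can be sketched in NumPy (the layer sizes below are hypothetical, not the ones from the paper):

```python
import numpy as np

def max_pool2d(x, k):
    """Non-overlapping k x k max-pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    x = x[:h - h % k, :w - w % k, :]            # crop to a multiple of k
    return x.reshape(h // k, k, w // k, k, c).max(axis=(1, 3))

# Hypothetical feature maps from an earlier and a later convolutional layer
early = np.random.rand(32, 32, 16)   # higher resolution, fewer channels
late = np.random.rand(8, 8, 64)      # lower resolution, more channels

# Pool the early map down to the late map's spatial size, then concatenate
# along the channel axis and flatten for the fully connected classifier.
fused = np.concatenate([max_pool2d(early, 4), late], axis=-1)   # (8, 8, 80)
flat = fused.reshape(-1)
```

Fusing an early, high-resolution map with a late, semantically richer one gives the classifier both fine detail and abstract features in a single input vector.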

Deep Residual Network-Based Recognition of Finger Wrinkles Using Smartphone Camera

Iris, fingerprint, and three-dimensional face recognition technologies used in mobile devices face obstacles owing to the price and size restrictions imposed by additional cameras, lighting, and sensors. As an alternative, two-dimensional face recognition based on the built-in visible-light camera of mobile devices has been widely used. However, face recognition performance is greatly influenced by factors such as facial expression, illumination, and pose changes. Considering these limitations, researchers have studied palmprint, touchless fingerprint, and finger-knuckle-print recognition using the built-in visible-light camera, but these techniques reduce user convenience because of the difficulty of positioning a palm or fingers on the camera. To address these issues, we propose a biometric system based on a finger-wrinkle image acquired by the visible-light camera of a smartphone. A deep residual network is used to address the degradation of recognition performance caused by misalignment and illumination variation during image acquisition. Owing to the lack of an open finger-wrinkle database acquired by smartphone camera, we built the Dongguk finger-wrinkle database, containing images from 33 people. The results show that the recognition performance of our method exceeds that of conventional methods.

Reference Paper IEEE 2019
Deep Residual Network-Based Recognition of Finger Wrinkles Using Smartphone Camera
Published in: IEEE Access ( Volume: 7 )
https://ieeexplore.ieee.org/document/8727862
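
The core building block of a deep residual network, the skip connection, can be illustrated with a minimal NumPy sketch (the weights and sizes are illustrative, not the project's trained network):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, W1, W2):
    """Basic residual unit: the block computes a correction F(x) that is
    added onto the identity shortcut, which eases training of deep networks."""
    out = relu(x @ W1)       # first transformation
    out = out @ W2           # second transformation (pre-activation output)
    return relu(out + x)     # skip connection, then nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # a batch of 4 feature vectors
W1 = 0.1 * rng.normal(size=(8, 8))
W2 = 0.1 * rng.normal(size=(8, 8))
y = residual_block(x, W1, W2)          # same shape as x: (4, 8)
```

Because the shortcut carries the input through unchanged, even a block whose weights are near zero behaves like the identity, which is what lets residual networks grow very deep without the degradation seen in plain stacks of layers.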

BallTrack: Football ball tracking for real-time CCTV systems

The paper describes a deep-network-based system specialized for ball detection in long-shot videos. The system comprises a flexible detector and classical particle tracking. The core contribution is the incorporation of the hypercolumn concept into the processing pipeline, achieving real-time tracking on 12 MPx videos. The system achieves state-of-the-art results on the ISSIA-CNR Soccer Dataset, and its feasibility has been tested on a four-camera prototype system.

Reference Paper IEEE 2019
BallTrack: Football ball tracking for real-time CCTV systems
Published in: 2019 16th International Conference on Machine Vision Applications (MVA)
https://ieeexplore.ieee.org/document/8757880
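
The classical particle-tracking stage can be sketched as a simple particle filter over 2-D ball positions; the motion and measurement parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, detection,
                         motion_std=2.0, meas_std=5.0):
    """One predict/update/resample step of a particle tracker over 2-D
    ball positions; `detection` is the detector's (x, y) for this frame."""
    # Predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by closeness to the detection
    d2 = ((particles - detection) ** 2).sum(axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_std ** 2))
    weights = weights / weights.sum()
    # Resample to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a ball drifting to the right, with one detection per frame
particles = rng.normal([0.0, 0.0], 10.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(20):
    detection = np.array([3.0 * t, 0.0])
    particles, weights = particle_filter_step(particles, weights, detection)
estimate = particles.mean(axis=0)        # settles near the last detection
```

The tracker smooths noisy per-frame detections into a continuous trajectory and can coast through frames where the detector misses the ball.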

Automatic Number Plate Recognition for a Smart Service Auto

Automatic Number Plate Recognition (ANPR) is a system that allows real-time recognition of a vehicle's license plate number. For a smart auto service, ANPR helps promote development, personalize classic applications, and increase productivity for clients and workers. The main role of ANPR in the application is to extract the characters of a vehicle license plate from an image. A smart car service offers, in addition to its other services, an application through which the customer can view the vehicle's repairs using only the license plate number extracted from an uploaded image. As technology continues to advance, this field needs to develop as well, and a smart car service is a natural candidate: using ANPR in such an application can ease the work of employees as well as clients of car services.

Reference Paper IEEE 2019
Automatic Number Plate Recognition for a Smart Service Auto
Published in: 2019 15th International Conference on Engineering of Modern Electric Systems (EMES)
https://ieeexplore.ieee.org/document/8795201
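
One common way to extract plate characters, the vertical projection of the binarized plate, can be sketched in NumPy; the threshold and minimum character width below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def segment_characters(plate, thresh=0.5, min_width=2):
    """Split a grayscale plate image (values in [0, 1], dark characters on a
    light background) into per-character column spans using the vertical
    projection of the binarized image."""
    binary = (plate < thresh).astype(int)   # 1 where ink is present
    profile = binary.sum(axis=0)            # amount of ink per column
    spans, start = [], None
    for x, v in enumerate(profile):
        if v > 0 and start is None:
            start = x                       # a character begins
        elif v == 0 and start is not None:
            if x - start >= min_width:      # ignore tiny noise blobs
                spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans
```

Each returned span is a column range containing one character, which can then be cropped and passed to a character recognizer.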

Image Enhancement by Jetson TX2 Embedded AI Computing Device

Image enhancement can be tailored and optimized to use the full capacity of the NVIDIA Jetson TX2 embedded AI computing device. We use parallel processing on the CPU and GPU to achieve real-time video enhancement. We also outline further improvement of the image enhancement process through a machine learning implementation.

Reference Paper IEEE 2019
Image Enhancement by Jetson TX2 Embedded AI Computing Device
Published in: 2019 8th Mediterranean Conference on Embedded Computing (MECO)
https://ieeexplore.ieee.org/document/8760100
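
A representative low-cost enhancement step that such a pipeline might parallelize, global histogram equalization, can be sketched in NumPy (this is a generic illustration, not the paper's specific algorithm):

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization of an 8-bit grayscale image: remap
    intensities through the normalized cumulative histogram so the output
    spreads over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # CDF of the darkest used level
    scale = max(gray.size - cdf_min, 1)
    lut = np.round(np.clip(cdf - cdf_min, 0, None) / scale * 255)
    return lut.astype(np.uint8)[gray]      # apply the lookup table per pixel
```

Because the mapping is a single 256-entry lookup table applied per pixel, the operation is embarrassingly parallel and maps well onto a GPU.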

Finger Vein Identification Based On Transfer Learning of AlexNet

Finger vein-based validation systems are attracting more attention than other authentication systems because of the high security they offer in terms of data confidentiality. The system works by recognizing patterns in finger vein images captured by a camera based on near-infrared technology. In this project, we build a finger vein identification system on our own finger vein dataset, train it by transfer learning from an AlexNet model, and verify it with test images. Three experiments were run on the same dataset with different data sizes, yielding varied predictability; the second experiment achieved 95% accuracy.

Reference Paper IEEE 2018
Finger Vein Identification Based On Transfer Learning of AlexNet
Published in: 2018 7th International Conference on Computer and Communication Engineering (ICCCE)
https://ieeexplore.ieee.org/document/8539256
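
The essence of transfer learning, keeping a pretrained feature extractor frozen and fitting only a new classifier head, can be illustrated with a toy NumPy analogue (a fixed random projection stands in for the pretrained AlexNet backbone; the sizes and labels are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (AlexNet in the project):
# a fixed projection followed by a ReLU, whose weights are never updated.
W_frozen = rng.normal(size=(64, 16))

def backbone(x):
    return np.maximum(x @ W_frozen, 0)

# Only the new classifier head is fit to the target task, here by a
# least-squares fit on the frozen features of 100 toy "images".
X = rng.normal(size=(100, 64))
y = (X[:, 0] > 0).astype(float)          # toy binary labels
F = backbone(X)                          # (100, 16) frozen features
w_head, *_ = np.linalg.lstsq(F, y, rcond=None)
acc = (((F @ w_head) > 0.5).astype(float) == y).mean()
```

In the real project the frozen layers are AlexNet's pretrained convolutional stages and the retrained head is its final fully connected layers, but the division of labor is the same.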

A Vision Module for Visually Impaired People by Using Raspberry PI Platform

The paper describes a vision-based platform for real-life indoor and outdoor object detection to guide visually impaired people. The application is developed in Python using functions from the OpenCV library and is ultimately ported to the Raspberry Pi 3 Model B+ platform. Template matching is selected as the detection method. More precisely, a multi-scale approach is proposed to reduce processing time and to extend the detection distance range for accurate traffic sign recognition in indoor/outdoor environments. The experimental part addresses finding the optimum values for the template and source image dimensions, as well as the scaling factor.

Reference Paper IEEE 2019
A Vision Module for Visually Impaired People by Using Raspberry PI Platform
Published in: 2019 15th International Conference on Engineering of Modern Electric Systems (EMES)
https://ieeexplore.ieee.org/document/8795205
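
The multi-scale idea, trying the template against several pyramid levels of the source image and keeping the best score, can be sketched in NumPy (sum of squared differences stands in for OpenCV's matching score, and the scale factors are illustrative):

```python
import numpy as np

def match_template(img, tpl):
    """Best top-left position of tpl inside img by sum of squared differences."""
    H, W = img.shape
    h, w = tpl.shape
    best, pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = ((img[y:y + h, x:x + w] - tpl) ** 2).sum()
            if d < best:
                best, pos = d, (y, x)
    return pos, best

def downscale(img, f):
    """Average-pool by an integer factor f (one level of a crude pyramid)."""
    H, W = img.shape
    img = img[:H - H % f, :W - W % f]
    return img.reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def multiscale_match(img, tpl, factors=(1, 2)):
    """Match the template at several scales of the source image and keep
    the scale with the lowest difference score."""
    results = [(match_template(downscale(img, f), tpl), f) for f in factors]
    (pos, _), f = min(results, key=lambda r: r[0][1])
    return pos, f
```

Searching coarser pyramid levels both cuts the number of candidate positions (reducing processing time) and lets a fixed-size template match objects that appear larger, i.e. closer signs, extending the usable distance range.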

Surface Defect Detection for Automated Inspection Systems using Convolutional Neural Networks

Optical inspection using unmanned aerial vehicles is a popular trend for detecting surface defects on industrial infrastructure, and full automation is the next step to improve its potential and reduce costs. Binary classification of the acquired visual image data into defect and defect-free sets is one sub-task of these systems and is still often carried out either completely manually by an expert or by automatic image post-processing with pre-defined features as classifiers. In contrast, deep convolutional neural networks (CNNs) perform both feature extraction and classification simultaneously through internal hierarchical learning. In this work, custom CNNs and a transfer-learned AlexNet are applied to an experimental dataset with artificial defects in order to analyze their suitability and the network depth required for such surface inspections. Experiments are performed on a set of 2500 camera images in total, yielding a classification accuracy of up to 99% with a single CNN while minimizing the number of actual defects falsely classified as defect-free. The results prove the general effectiveness of the methodology and motivate its application to specific inspection tasks.

Reference Paper IEEE 2019
Surface Defect Detection for Automated Inspection Systems using Convolutional Neural Networks
Published in: 2019 27th Mediterranean Conference on Control and Automation (MED)
https://ieeexplore.ieee.org/document/8798497
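
Keeping falsely negative defects to a minimum usually comes down to how the decision threshold on the CNN's defect score is chosen; a minimal NumPy sketch of one such threshold rule (an illustration, not the paper's procedure) is:

```python
import numpy as np

def pick_threshold(scores, labels, min_recall=0.99):
    """Highest threshold on the CNN's defect scores that still keeps recall
    on the defect class at min_recall, so that almost no real defect is
    classified as defect-free (score >= threshold means 'defect')."""
    defect_scores = np.sort(scores[labels == 1])
    # Number of defects we can afford to miss; the epsilon guards against
    # floating-point error in the product.
    k = int(np.floor((1.0 - min_recall) * len(defect_scores) + 1e-9))
    return defect_scores[k]
```

Lowering the threshold trades false positives (defect-free images flagged for review) for fewer missed defects, which is the right trade-off when a missed defect is the costly error.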

Deep Learn Helmets-Enhancing Security at ATMs

Automatic Teller Machines (ATMs) play a vital role in our modern economy, with around 130,000 ATM centers functioning across India. Real-time intelligent video analytics offers advanced monitoring capabilities, giving video surveillance the sophistication to recognize abnormal activities. A person wearing a helmet inside an ATM center is one such anomalous activity, so an automatic helmet detection algorithm is required to raise an alert when a person wears a helmet in the ATM. Helmets are detected using deep convolutional neural network (CNN) architectures such as VGGNet (Visual Geometry Group) and AlexNet, and the helmet region is localized using a Region-based Convolutional Neural Network (RCNN) with 15 layers. The performance of this technique has been tested on 880 test images out of the 1880 images in the database, and results are compared across different mini-batch sizes and epoch counts in AlexNet.

Reference Paper IEEE 2019
Deep Learn Helmets-Enhancing Security at ATMs
Published in: 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS)
https://ieeexplore.ieee.org/document/8728493
