Feature Extraction from Images using Python


By Nishank Biswas

Feature extraction can be seen as the process of tracing out the underlying structural framework of an image: elements that may not define the image outright, but that can serve as a signature for identifying images that share the same roots. Research and development in this domain is oriented towards high performance in contexts such as image compression and real-time analysis of large image collections. Viewed from a broader perspective, the main objective of feature extraction is to identify the combination of key features through which we can establish a relationship between images belonging to a particular type.

What counts as a "key feature", however, lies within a fuzzily defined boundary that depends on the type of image, the depth of perspective, and the objective of the extraction. This sounds demanding, but it is not, at least for the human brain: we hardly face any great difficulty solving a picture puzzle, because the brain performs feature extraction reflexively through its complex neural circuitry. That same fuzziness is what makes the task hard when we try to impart the technique to computing systems far less sophisticated than the brain. Nevertheless, we can mimic the brain's feature extraction to a great extent by using machine learning algorithms that converge towards the most probable key features. These algorithms compute over the pixel intensities of a grayscale version of the image and flag anomalies against threshold parameters as candidates for key features. Moreover, they do not merely detect the key features; they also learn the parameters that describe the region of interest each feature belongs to. These parameters are often called feature descriptors, and they can ultimately be used to track down the same features in other datasets.

Since the complexity of feature detection is a function of the type of image and of the objective, synchronizing the algorithm with that context makes it more efficient than it would otherwise be. To be specific, it is often handier to apply predefined conventional image processing first, polishing out the crucial aspects of the image and thereby exposing its features, than to let a sophisticated algorithm pinpoint the features by digging deep into the raw image, which can be costly. At times, though, algorithm-intensive feature extraction is exactly what the situation demands, especially where the key aspects of the image are not plainly in view.

Feature Extraction using Python

We can perform both conventional image processing, for a controlled upbringing of features, and feature extraction with more sophisticated techniques by incorporating suitable packages in Python. To start with controlled feature extraction, we will use a Python package called scikit-image for image processing. Almost all the images encountered in day-to-day life are in RGB format: the image is stored as a three-dimensional matrix whose third dimension holds three layers, corresponding to Red, Green and Blue. These layers carry the colour information, which matters for display but is not actually crucial for feature extraction. It is therefore a good idea to drop the third dimension, performing a kind of compression while keeping the most important part, the intensity distribution, intact as shades of grey.

from skimage import io
from skimage.color import rgb2gray
import matplotlib.pyplot as plt

image = io.imread('image.png')   # RGB (or RGBA) image as a NumPy array
gray_scale = rgb2gray(image)     # 2-D array of intensities

# display the original and the converted image side by side
plt.subplot(1, 2, 1); plt.imshow(image); plt.title('RGB Image')
plt.subplot(1, 2, 2); plt.imshow(gray_scale, cmap='gray'); plt.title('Grayscale Image')
plt.show()
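For reference, rgb2gray does not take a plain average of the three channels; the scikit-image documentation gives the conversion as the weighted luma sum Y = 0.2125 R + 0.7154 G + 0.0721 B, so the perceptually brighter green channel contributes the most. Ignoring dtype normalization, an equivalent one-liner would be:

gray_scale = image[..., :3] @ [0.2125, 0.7154, 0.0721]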

The image converted to grayscale inherits the crucial property of intensity distribution across two dimensions, but at times it is reasonable to drop even that precision, reducing storage further by replacing each pixel with a flag that is either 0 or 1. The key to keeping the essence of the image alive is the threshold value, above or below which one flag or the other is set, and this value can be defined as a function of the intensity distribution of the grayscale image. To apply this so-called binarization we can use Otsu's method to find the optimal threshold.

from skimage.filters import threshold_otsu

threshold = threshold_otsu(gray_scale)  # optimal global threshold
binary = gray_scale > threshold         # boolean image: the 0/1 flags

plt.subplot(1, 2, 1); plt.imshow(gray_scale, cmap='gray'); plt.title('Grayscale')
plt.subplot(1, 2, 2); plt.imshow(binary, cmap='gray'); plt.title('Otsu_binary')
plt.show()
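Otsu's method chooses the threshold that best splits the intensity histogram into two classes by maximizing the variance between them (equivalently, minimizing the variance within them). The following NumPy sketch of the idea is for illustration only; in practice threshold_otsu is the tool to use:

import numpy as np

def otsu_threshold(gray, bins=256):
    # normalized intensity histogram and bin centres
    hist, edges = np.histogram(gray.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -np.inf
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()         # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0    # class mean intensities
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t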

One can always think of a situation where the image under analysis accommodates many irrelevant elements with randomly distributed intensities, which can severely mislead a network analysing the image. A way out of this is to apply a blurring effect, which smooths out small-scale intensity variations so that irrelevant detail no longer stands out.

from skimage.filters import gaussian

blurred = gaussian(gray_scale, sigma=20)  # heavy Gaussian smoothing

plt.subplot(1, 2, 1); plt.imshow(gray_scale, cmap='gray'); plt.title('Grayscale')
plt.subplot(1, 2, 2); plt.imshow(blurred, cmap='gray'); plt.title('Blurred')
plt.show()
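The sigma argument is the standard deviation of the Gaussian kernel, measured in pixels: each output pixel becomes a weighted average of its neighbourhood, with weights falling off smoothly with distance. A sigma of 20 is deliberately aggressive and washes out almost all structure; when the goal is merely to suppress noise before feature detection, values in the range of 1 to 5 are more typical.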

Several times we also encounter a situation in which the image contains a lot of information that is redundant for analysis, apart from a few defining elements such as edges and corners. It is therefore very handy to have a tool that amplifies these crucial aspects of the image while suppressing the others. One way to implement this, using the Sobel operator, is illustrated as:

import numpy as np
from scipy import ndimage
import imageio

# scipy.misc.imread/imsave were removed from recent SciPy; imageio offers equivalent calls
image = imageio.imread('image.jpg').astype('int32')  # int32 avoids overflow in the derivatives

# derivatives along each image axis
dx = ndimage.sobel(image, 0)
dy = ndimage.sobel(image, 1)
mag = np.hypot(dx, dy)  # gradient magnitude

# normalization into the 0-255 range
mag *= 255.0 / np.max(mag)
imageio.imwrite('features_image.jpg', mag.astype('uint8'))
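Here np.hypot(dx, dy) computes the per-pixel gradient magnitude √(dx² + dy²), which is large wherever the intensity changes rapidly in either direction, that is, along edges. The final step simply rescales the magnitudes into the familiar 0-255 range so the result can be saved as an ordinary image.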

After this controlled feature extraction we can easily propagate the resulting image forward, be it for further processing or for analysis by a learnable network. We can now move to feature extraction on images whose features are not explicitly in view, so that the features must be detected, along with their descriptions, by more sophisticated algorithms. We will use an algorithm provided in the OpenCV package called ORB (Oriented FAST and Rotated BRIEF), which is basically a combination of the FAST keypoint detector and the BRIEF descriptor. Looking a little deeper into how this module works: keypoint detection is first accomplished by the FAST algorithm, after which the top N keypoints among them are selected using the very popular Harris corner score; the BRIEF descriptor is then computed with a significant modification over the original version so as to achieve rotation invariance, allowing flexibility in the orientation of the features. A typical use can be illustrated as:

import numpy as np
import cv2
from matplotlib import pyplot as plt

image = cv2.imread('image.jpg', 0)  # the 0 flag loads the image as grayscale

orb = cv2.ORB_create()              # cv2.ORB() in OpenCV 2.x
key_points = orb.detect(image, None)
key_points, description = orb.compute(image, key_points)
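With the keypoints and descriptors in hand, the natural next step is matching them against those of another image, which is how features are tracked across datasets. Below is a minimal sketch using OpenCV's brute-force matcher; the second file, image2.jpg, is only an assumed example input:

image2 = cv2.imread('image2.jpg', 0)  # hypothetical second image
kp2, desc2 = orb.detectAndCompute(image2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(description, desc2), key=lambda m: m.distance)

# visualize the 20 closest matches
matched = cv2.drawMatches(image, key_points, image2, kp2, matches[:20], None)
plt.imshow(matched); plt.show()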

It is worth noting that we can also use a neural network to extract features from images where they are not plainly in view or not specifically presentable, but it is equally important to realise how large the data for even a typical standard-definition image can be: feeding such a large collection of raw values straight into a conventional neural network would be significantly dicey. In order to extract features through the weights associated with the neurons of a network, we first pass the image through convolution channels, which makes the data processable by an ordinary architecture. After the image has passed through the convolutional part, we extract the output of the dense layer of the network, which is nothing but the hidden layer of a simple architecture, and that function can then be used to extract features from any image. We can illustrate this process in the context of image classification on the MNIST dataset, using the convolutional neural network training supported by the nolearn package on top of its Theano-based Lasagne implementation. To accomplish that, we first load the dataset into the current environment:

import gzip
import pickle
import numpy as np

def load_data():
    file = 'mnist.pkl.gz'
    with gzip.open(file, 'rb') as f:
        data = pickle.load(f, encoding='latin1')
    Images, labels = data[0]                   # training split
    Images = Images.reshape((-1, 1, 28, 28))   # (N, channels, height, width)
    labels = labels.astype(np.uint8)
    return Images, labels

Images, labels = load_data()
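The file mnist.pkl.gz used here is the pickled MNIST archive distributed with the Theano tutorials; it unpickles to a (train, validation, test) tuple, which is why data[0] yields the training images and labels. Each image arrives as a flat vector of 784 grayscale values, and the reshape to (-1, 1, 28, 28) restores the single-channel 28 x 28 layout that the convolutional layers expect.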

Then we proceed to define the convolutional neural network, initializing the various layers according to the complexity that needs to be induced:

import lasagne
from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet

# Network Initialization
net1 = NeuralNet(
    layers=[('input', layers.InputLayer),
            ('conv_layer', layers.Conv2DLayer),
            ('maxpool', layers.MaxPool2DLayer),
            ('dropout', layers.DropoutLayer),
            ('dense_layer', layers.DenseLayer),
            ('output', layers.DenseLayer)],
    # Input layer definition
    input_shape=(None, 1, 28, 28),
    # Convolution layer definition
    conv_layer_num_filters=32,
    conv_layer_filter_size=(4, 4),
    conv_layer_nonlinearity=lasagne.nonlinearities.rectify,
    maxpool_pool_size=(2, 2),  # max-pooling layer
    dropout_p=0.5,             # dropout layer
    # Dense layer definition
    dense_layer_num_units=256,
    dense_layer_nonlinearity=lasagne.nonlinearities.rectify,
    # Output layer definition
    output_nonlinearity=lasagne.nonlinearities.softmax,
    output_num_units=10,
    # Optimization method parameters
    update=nesterov_momentum,
    update_learning_rate=0.01,
    update_momentum=0.9,
    max_epochs=5,
    verbose=1)

net1.fit(Images, labels)  # Training the network

Having defined the network parameters, we supply the training data so that supervised learning adjusts the associated weights accordingly and produces a completely trained network. In the context of feature extraction, however, we cut the network short for our purpose and feed-forward new data only up to the dense layer, extracting features from the activations there. This can be fabricated as:

import theano

dense = layers.get_output(net1.layers_['dense_layer'], deterministic=True)
input_var = net1.layers_['input'].input_var
dense_feature = theano.function([input_var], dense)  # compiled function for feature extraction
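Calling the compiled function on a batch of images then returns one feature vector per image, 256-dimensional here because that is the size chosen for the dense layer above:

features = dense_feature(Images[:100])  # array of shape (100, 256)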

Applications of feature extraction

Security System

It is not difficult to guess the extent of data handling done by a sophisticated surveillance system that matches facial patterns and fingerprints against an existing database. Deploying plain-vanilla pixel-by-pixel matching there would not yield appreciable performance; embedding feature extraction into the image pipeline enhances the performance of the system significantly.

Image Search Engine

The primary objective of an image search engine is real-time processing and execution of a query, but matching the image under evaluation directly against billions of stored images would be all but impossible in real time, even on a modern server. The catch is to extract features first and then match on those features, which increases performance dramatically compared with what it would otherwise be.

Astronomical data storage

The advanced high-resolution telescopes used for astronomical analysis produce data of humongous size, on the order of gigabytes per second, and storing images that large in a database could be very expensive if not practically impossible. The way out again lies in feature extraction, which can also be seen as an excellent way of compressing the data while preserving only the most crucial features.
