Friday, 18 December 2015

How to enhance the contrast of an image using histogram equalization in MATLAB?


Ans=> To enhance contrast using histogram equalization we can use the function “histeq”... An example is given below for reference...

a=imread('cameraman.tif');        %reading an image
b=histeq(a);                      %histogram equalization
figure,                           %opening figure window
subplot(1,2,1),subimage(a);title('original image');              %display original image
subplot(1,2,2),subimage(b);title('hist equalized image');        %display equalized image
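
Note that histeq expects a single-channel image. For an RGB image, one common approach is to equalize only the brightness channel; a small sketch using MATLAB's built-in peppers.png demo image (the choice of the HSV value channel here is just one reasonable option):

rgb = imread('peppers.png');       %built-in RGB demo image
hsv = rgb2hsv(rgb);                %separate hue/saturation from brightness
hsv(:,:,3) = histeq(hsv(:,:,3));   %equalize only the value (brightness) channel
out = hsv2rgb(hsv);                %convert back to RGB
figure, imshow(out);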



 FOR MORE DETAILS
                  click here


SEGMENTATION OF PHEOCHROMOCYTOMAS IN CECT IMAGES USING MATLAB



          Segmentation of pheochromocytomas in Contrast-Enhanced Computed Tomography (CECT) images is an ill-posed problem due to the presence of weak boundaries, intratumoral degeneration, and nearby structures and clutter.
            Co-segmenting common objects from a pair of images simultaneously has drawn much attention. In such cases, region-based level-set methods (RLSMs) are more suitable because they use statistical information about the foreground and background regions. To improve the capability of segmenting objects with heterogeneous regions, local image information is widely incorporated in many localized RLSMs (LRLSMs).
Fig: Region of tumor
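
As a rough illustration of the region-based idea, MATLAB's built-in activecontour function implements Chan-Vese region-based level-set segmentation. This is not the paper's localized model, and the file name and seed rectangle below are placeholders:

I = im2double(imread('cect_slice.png'));         % placeholder: a grayscale CECT slice
mask = false(size(I));
mask(80:160,100:180) = true;                     % rough seed rectangle inside the tumor (placeholder)
bw = activecontour(I, mask, 300, 'Chan-Vese');   % region-based level-set evolution
figure, imshow(I); hold on;
visboundaries(bw,'Color','r');                   % outline the segmented region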

FOR DETAILS ABOUT LATEST IEEE PROJECTS





LICENSE PLATE NUMBER RECOGNITION USING MATLAB

                   The design of a new genetic algorithm (GA) is introduced to detect the locations of license plate (LP) symbols. An adaptive threshold method is applied to overcome the dynamic changes in illumination conditions when converting the image to binary. A connected component analysis technique (CCAT) is used to detect candidate objects inside the unknown image. A scale-invariant geometric relationship matrix is introduced to model the layout of symbols in any LP, which simplifies system adaptability when applied in different countries. Moreover, two new crossover operators, based on sorting, are introduced, which greatly improve the convergence speed of the system.



                           Fig. Detected license plate number

                            Most of the CCAT problems, such as touching or broken bodies, are minimized by modifying the GA to perform a partial match until an acceptable fitness value is reached. The system is implemented in MATLAB, and a variety of image samples are used in experiments to verify the effectiveness of the proposed system.
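
A minimal sketch of the binarization and connected-component stage is given below. It uses a global Otsu threshold as a stand-in for the paper's adaptive method, and the file name, area limit, and aspect-ratio bounds are assumed values:

img = imread('car.jpg');                 % placeholder input image
gray = rgb2gray(img);
bw = ~im2bw(gray, graythresh(gray));     % Otsu threshold; assume dark symbols on a light plate
stats = regionprops(bwconncomp(bw), 'BoundingBox', 'Area');
figure, imshow(img); hold on;
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;
    aspect = bb(3)/bb(4);                % width/height of the component
    if stats(k).Area > 50 && aspect > 0.2 && aspect < 1.2   % plausible symbol shape
        rectangle('Position', bb, 'EdgeColor', 'g');        % mark candidate symbol
    end
end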


 FOR LATEST IEEE PROJECTS
                                                           click here

CONTENT BASED IMAGE RETRIEVAL FOR WEB APPLICATION USING IMAGE PROCESSING

CONTENT BASED IMAGE RETRIEVAL 

          "Content-based" means that the search analyzes the contents of the image rather than the metadata such as keywords, tags, or descriptions associated with the image. The term "content" in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because searches that rely purely on metadata are dependent on annotation quality and completeness. Having humans manually annotate images by entering keywords or metadata in a large database can be time consuming and may not capture the keywords desired to describe the image. The evaluation of the effectiveness of keyword image search is subjective and has not been well-defined. In the same regard, CBIR systems have similar challenges in defining success.

                                     Figure. An example of an image retrieval operation

Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images in large databases. Content-based image retrieval stands in contrast to traditional concept-based approaches, which rely on textual metadata.
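
A toy illustration of the content-based idea: describe each image by its color histogram and rank a folder of database images by distance to the query. The file and folder names are placeholders, and all images are assumed to be RGB:

getHist = @(img) [imhist(img(:,:,1)); imhist(img(:,:,2)); imhist(img(:,:,3))];

query = imread('query.jpg');                   % placeholder query image
qh = getHist(query) / numel(query);            % normalized color histogram

files = dir(fullfile('database','*.jpg'));     % placeholder image database
d = zeros(numel(files),1);
for k = 1:numel(files)
    img = imread(fullfile('database',files(k).name));
    h = getHist(img) / numel(img);
    d(k) = norm(qh - h);                       % Euclidean distance between histograms
end
[~,order] = sort(d);                           % most similar images first
disp({files(order(1:min(5,numel(order)))).name});   % names of the top-5 matches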


FOR MORE DETAILS

                     click here

SEGMENTATION-BASED IMAGE COPY-MOVE FORGERY DETECTION SCHEME

MATLAB CODE FOR IMAGE COPY-MOVE FORGERY DETECTION
 An image with copy-move forgery (CMF) contains at least a couple of regions whose contents are identical. CMF may be performed by a forger aiming either to cover the truth or to enhance the visual effect of the image. Viewers can easily overlook this malicious operation when the forger deliberately hides the tampering traces, so an effective CMF detection (CMFD) method is urgently needed to automatically point out the cloned regions in an image. CMFD is currently becoming one of the most important and popular digital forensic techniques.



       Figure. a) Original image, b) Copy-move forgery image, c) Detection of the CMF region

         Digital images are easy to manipulate and edit due to the availability of powerful image processing and editing software. Nowadays, it is possible to add or remove important features from an image without leaving any obvious traces of tampering. As digital cameras and video cameras replace their analog counterparts, the need for authenticating digital images, validating their content, and detecting forgeries will only increase.
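
A bare-bones block-matching sketch of the CMFD idea is shown below: flatten every 8x8 block, sort the blocks so identical ones become adjacent, and report duplicate pairs that are far enough apart. Real schemes use robust features (DCT, Zernike moments, SIFT) instead of raw pixels, and the file name here is a placeholder:

img = double(rgb2gray(imread('forged.jpg')));  % placeholder forged image
B = 8;                                         % block size
[rows, cols] = size(img);
n = (rows-B+1)*(cols-B+1);
feat = zeros(n, B*B); pos = zeros(n,2); t = 0;
for r = 1:rows-B+1
    for c = 1:cols-B+1
        blk = img(r:r+B-1, c:c+B-1);
        t = t + 1;
        feat(t,:) = blk(:)';                   % flatten block into a feature row
        pos(t,:)  = [r c];
    end
end
[featS, idx] = sortrows(feat);                 % identical blocks become neighbors
for k = 1:n-1
    if isequal(featS(k,:), featS(k+1,:))       % exact duplicate pair
        p1 = pos(idx(k),:); p2 = pos(idx(k+1),:);
        if norm(p1 - p2) > B                   % skip trivially overlapping blocks
            fprintf('Duplicate blocks at (%d,%d) and (%d,%d)\n', p1, p2);
        end
    end
end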


FOR MORE DETAILS

                             click here

How to convert original image into complement image using MATLAB?

Query=> How to convert an original image into its complement image using MATLAB?...

Ans=> To convert an original image into its complement we can use the function “imcomplement”... An example is given below for reference...

a=imread('cameraman.tif');         %reading an image
b=imcomplement(a);                 %taking the complement
figure,                            %opening figure window
subplot(1,2,1),subimage(a);title('original image');          %display original image
subplot(1,2,2),subimage(b);title('complement image');        %display complement image



 FOR MORE DETAILS
                                      click here

Monday, 14 December 2015

SVM - Support vector machine with MATLAB


First of all, let me start by saying that I am a student, currently working as a student assistant at Technische Universität Chemnitz. The project handed over to me was on object recognition and the development of a working model. During this project I noticed that very few students actually know how to do image processing, and above all there is no good tutorial for beginners who do not want to stop at theoretical knowledge and would rather get their hands dirty with MATLAB programming. So this blog is targeted at students who want to work in this field and unfortunately cannot find relevant information on machine learning algorithms and MATLAB programming anywhere on the Internet.
Fig: SVM classification algorithm

 An image can be processed in plenty of ways, and the one I will present to you uses machine learning algorithms in MATLAB. I will keep my language as basic as possible for beginners to understand; no offense to professionals, as we were all in a learning phase at some point in our lives.

 My tutorial will follow a very basic structure as follows:
Obtaining the image datasets - (I will be using the Caltech101 dataset)
Separating training set and test set images.
Creating labels for SVM training to distinguish classes.
Training the SVM.
Classifying the test set images.
At the moment, I will assume that you are familiar with the term machine learning algorithms. I have absolutely zero intention to discuss theory over here.

 Just for beginners,
 Training set - This set of images will be used to train our SVM.
 Test set - After training the SVM, we will use these images for classification.
 Label - I will use faces and airplanes; these are two object classes, so we will give them two "labels".
 Classify - Distinguish our test set images.

 Finally, I will present you with a simple code for classification using SVM. I have used the Caltech101 dataset for this experiment. The training dataset consists of 30 images divided into two classes, and two labels are provided for them. The code is basic enough to be understood easily. Hope it helps. The program goes as follows:

 Preparatory steps:
 Training set - Create a folder with 15 "Faces" images and 15 "airplanes" images; this will be our dataset.
 Test set - Create another folder with random face and airplane images; this will be our test set. The key point to understand here is that if you use the training set images as test set images, you will get 100% recognition performance.
 --------------------------------------------------------------------------------------------------------
 clc
 clear all

 % Load Datasets

 Dataset = 'absolute path of the folder'; 
 Testset = 'absolute path of the folder';

 % we need to process the images first.
 % Convert your images into grayscale
 % Resize the images

 width=100; height=100;
 DataSet = cell([], 1);

% Training set process: loop over all jpg files in the Dataset folder
k = dir(fullfile(Dataset,'*.jpg'));
k = {k(~[k.isdir]).name};
for j=1:length(k)
    tempImage = imread(horzcat(Dataset,filesep,k{j}));
    imgInfo = imfinfo(horzcat(Dataset,filesep,k{j}));

    % Image transformation: convert to grayscale if needed, then resize
    if strcmp(imgInfo.ColorType,'grayscale')
        DataSet{j} = double(imresize(tempImage,[width height])); % array of images
    else
        DataSet{j} = double(imresize(rgb2gray(tempImage),[width height])); % array of images
    end
end
TestSet = cell([], 1);

% Test set process: loop over all jpg files in the Testset folder
k = dir(fullfile(Testset,'*.jpg'));
k = {k(~[k.isdir]).name};
for j=1:length(k)
    tempImage = imread(horzcat(Testset,filesep,k{j}));
    imgInfo = imfinfo(horzcat(Testset,filesep,k{j}));

    % Image transformation: convert to grayscale if needed, then resize
    if strcmp(imgInfo.ColorType,'grayscale')
        TestSet{j} = double(imresize(tempImage,[width height])); % array of images
    else
        TestSet{j} = double(imresize(rgb2gray(tempImage),[width height])); % array of images
    end
end

% Prepare class labels for the first run of the svm
% I have arranged labels 1 & 2 as per my convenience.
% It is always better to label your images numerically.
% Please note that for every image in our Dataset we need to provide one label.
% We have 30 images, divided into two label groups here.
train_label = zeros(30,1);       % one label per training image
train_label(1:15,1) = 1;         % 1 = Airplanes
train_label(16:30,1) = 2;        % 2 = Faces

 % Prepare numeric matrix for svmtrain
Training_Set = [];
for i=1:length(DataSet)
    Training_Set_tmp = reshape(DataSet{i},1, width*height);   % flatten image into a row vector
    Training_Set = [Training_Set; Training_Set_tmp];
end

Test_Set = [];
for j=1:length(TestSet)
    Test_set_tmp = reshape(TestSet{j},1, width*height);
    Test_Set = [Test_Set; Test_set_tmp];
end

% Perform first run of svm
% (svmtrain/svmclassify are the older Statistics Toolbox interface)
SVMStruct = svmtrain(Training_Set, train_label, 'kernel_function', 'linear');
Group = svmclassify(SVMStruct, Test_Set);

------------------------------------------------------------------------------------------------------------

 Finally, you can check your image recognition performance by inspecting the Group variable. You can also try giving the same location for the dataset and the test set, and you will achieve 100% recognition. This is because the images being classified are the very ones you used to train your SVM.
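
If you know the true class of each test image, you can turn the Group variable into a recognition rate. The ordering below (5 airplane images followed by 5 face images) is a hypothetical example; adjust it to match your own test folder:

expected = [ones(5,1); 2*ones(5,1)];           % hypothetical true labels of the test set
accuracy = sum(Group == expected) / numel(expected) * 100;
fprintf('Recognition rate: %.1f%%\n', accuracy);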
FOR MORE DETAILS CLICK HERE

how to detect spoofing in iris, face and fingerprint using matlab?

SPOOFING DETECTION OF IRIS, FACE, AND FINGERPRINT


              Three relevant modalities in which spoofing detection has been investigated are iris, face, and fingerprint. Benchmarks across these modalities usually share the common characteristic of being image- or video-based. In the context of irises, attacks are normally performed using printed iris images or, more interestingly, cosmetic contact lenses. With faces, impostors can present to the acquisition sensor a photograph, a digital video, or even a 3D mask of a valid user. For fingerprints, the most common spoofing method consists of using artificial replicas created in a cooperative way, where a mold of the fingerprint is acquired with the cooperation of a valid user and is used to replicate the user's fingerprint with different materials, including gelatin, latex, Play-Doh, or silicone.





Fig: Real and fake fingerprints



FOR MORE DETAILS 



                                                                                                 CLICK HERE

how to convert RGB to binary image using matlab?

Query=> How to convert a colour image into a binary image based on a threshold using MATLAB?...

Ans=> To convert a colour image into a binary image we can use the function “im2bw”... An example is given below for reference...

a=imread('peppers.png');           %reading a colour image
b=im2bw(a,0.3);                    %converting to binary based on the threshold value
figure,                            %opening figure window
subplot(1,2,1),subimage(a);        %display colour image
subplot(1,2,2),subimage(b);        %display binary image
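
Rather than hard-coding the 0.3 threshold, it can be computed automatically with Otsu's method via graythresh:

level = graythresh(rgb2gray(a));   %Otsu's threshold in [0,1], computed on the grayscale version
b2 = im2bw(a,level);               %binary image with the automatic threshold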
FOR MORE DETAILS

how to do dilate operation using matlab?

Query=> How to dilate a binary image with a structuring element in MATLAB?...

Ans=> To dilate a binary image with a structuring element we can use the function “imdilate”... An example is given below for reference...

bw = imread('text.png');           %reading an image
se = strel('line',11,90);          %structuring element
bw2 = imdilate(bw,se);             %dilate process
figure,                            %opening figure window
subplot(1,2,1),subimage(bw);title('original image');        %display original image
subplot(1,2,2),subimage(bw2);title('dilated image');        %display dilated image



FOR MORE DETAILS
                                          click here

how to detect traffic sign using matlab?

TRAFFIC SIGN DETECTION



               The majority of existing traffic sign detection systems utilize color or shape information, but the methods remain limited in regard to detecting and segmenting traffic signs from a complex background. In this paper, we propose a novel graph-based traffic sign detection approach that consists of a saliency measure stage, a graph-based ranking stage, and a multithreshold segmentation stage. Because the graph-based ranking algorithm with specified color and saliency combines the information of color, saliency, spatial, and contextual relationship of nodes, it is more discriminative and robust than the other systems in terms of handling various illumination conditions, shape rotations, and scale changes from traffic sign images. 

Fig: Flow of the proposed traffic sign detection system. (a) Input image. (b) Graph design. (c) Ranking results with specified colors. (d) Segmentation results. (e) Final results of the traffic sign detection system.
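
For contrast, a much simpler color-based baseline of the kind the paper improves upon can be sketched by thresholding red hues; the file name and threshold values below are assumptions, not part of the paper:

img = imread('road_scene.jpg');              % placeholder scene image
hsv = rgb2hsv(img);
h = hsv(:,:,1); s = hsv(:,:,2);
redMask = (h < 0.05 | h > 0.95) & s > 0.4;   % red hues wrap around 0/1 in HSV
redMask = bwareaopen(redMask, 200);          % drop small specks
stats = regionprops(redMask, 'BoundingBox');
figure, imshow(img); hold on;
for k = 1:numel(stats)
    rectangle('Position', stats(k).BoundingBox, 'EdgeColor', 'y', 'LineWidth', 2);
end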

FOR MORE DETAILS   

                                               CLICK HERE

how to read image using matlab?

Read an Image from File:

Ans=> To get an image from a file, the "uigetfile" command can be used, which returns both the file name and the path name simultaneously... An example of reading an image this way is worked out below...
[filename,pathname] = uigetfile('*.jpg;*.png;*.bmp;*.tif');   % to get an image of different image format
I = imread([pathname,filename]);  % imread will read the selected image
imshow(I);   %to view the image
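
uigetfile returns 0 when the user cancels the dialog, so a small guard avoids an error in imread; here is a slightly more defensive version of the same snippet:

[filename,pathname] = uigetfile('*.jpg;*.png;*.bmp;*.tif');
if isequal(filename,0)                     % user pressed Cancel
    disp('No file selected');
    return
end
I = imread(fullfile(pathname,filename));   % fullfile joins path and name safely
imshow(I);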




FOR MORE DETAILS,

                                CLICK HERE

how to do erode operation in MATLAB?

Query=> How to erode a binary image with a structuring element in MATLAB?...

Ans=> To erode a binary image with a structuring element we can use the function “imerode”... An example is given below for reference...

originalBW = imread('circles.png');          %reading an image
se = strel('disk',11);                       %structuring element
erodedBW = imerode(originalBW,se);           %erode process
figure,                                      %opening figure window
subplot(1,2,1),subimage(originalBW);title('original image');   %display original image
subplot(1,2,2),subimage(erodedBW);title('eroded image');       %display eroded image

FOR MORE DETAILS
                                     click here

Thursday, 10 December 2015

Find Image Rotation and Scale using matlab code

      Find Image Rotation and Scale:       

                This example shows how to automatically align two images that differ by a rotation and a scale change. It closely parallels another example titled Find Image Rotation and Scale. Instead of using a manual approach to register the two images, it utilizes feature-based techniques found in the Computer Vision System Toolbox to automate the registration process.
In this example, you will use detectSURFFeatures and estimateGeometricTransform to recover the rotation angle and scale factor of a distorted image. You will then transform the distorted image to recover the original image.

Step 1: Read Image

Bring an image into the workspace.
%% Step 1: Read Image
% Bring an image into the workspace.
original = imread('cameraman.tif');
imshow(original);
text(size(original,2),size(original,1)+15, ...
    'Image courtesy of Massachusetts Institute of Technology', ...
    'FontSize',7,'HorizontalAlignment','right');

 

Step 2: Resize and Rotate the Image

%% Step 2: Resize and Rotate the Image

scale = 0.7;
J = imresize(original, scale); % Try varying the scale factor.

theta = 30;
distorted = imrotate(J,theta); % Try varying the angle, theta.
figure, imshow(distorted)


%%
% You can experiment by varying the scale and rotation of the input image.
% However, note that there is a limit to the amount you can vary the scale
% before the feature detector fails to find enough features.
 

Step 3: Find Matching Features Between Images

%% Step 3: Find Matching Features Between Images
% Detect features in both images.
ptsOriginal  = detectSURFFeatures(original);
ptsDistorted = detectSURFFeatures(distorted);

%%
% Extract feature descriptors.
[featuresOriginal,   validPtsOriginal]  = extractFeatures(original,  ptsOriginal);
[featuresDistorted, validPtsDistorted]  = extractFeatures(distorted, ptsDistorted);

%%
% Match features by using their descriptors.
indexPairs = matchFeatures(featuresOriginal, featuresDistorted);

%%
% Retrieve locations of corresponding points for each image.
matchedOriginal  = validPtsOriginal(indexPairs(:,1));
matchedDistorted = validPtsDistorted(indexPairs(:,2));

%%
% Show point matches. Notice the presence of outliers.
figure;
showMatchedFeatures(original,distorted,matchedOriginal,matchedDistorted);
title('Putatively matched points (including outliers)');

 

Step 4: Estimate Transformation

%% Step 4: Estimate Transformation
% Find a transformation corresponding to the matching point pairs using the
% statistically robust M-estimator SAmple Consensus (MSAC) algorithm, which
% is a variant of the RANSAC algorithm. It removes outliers while computing
% the transformation matrix. You may see varying results of the
% transformation computation because of the random sampling employed by the
% MSAC algorithm.
[tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform(...
    matchedDistorted, matchedOriginal, 'similarity');

%%
% Display matching point pairs used in the computation of the
% transformation matrix.
figure;
showMatchedFeatures(original,distorted, inlierOriginal, inlierDistorted);
title('Matching points (inliers only)');
legend('ptsOriginal','ptsDistorted');
 

Step 5: Solve for Scale and Angle

%% Step 5: Solve for Scale and Angle
% Use the geometric transform, TFORM, to recover
% the scale and angle. Since we computed the transformation from the
% distorted to the original image, we need to compute its inverse to
% recover the distortion.
%
%  Let sc = scale*cos(theta)
%  Let ss = scale*sin(theta)
%
%  Then, Tinv = [sc -ss  0;
%                ss  sc  0;
%                tx  ty  1]
%
%  where tx and ty are x and y translations, respectively.
%

%%
% Compute the inverse transformation matrix.
Tinv  = tform.invert.T;

ss = Tinv(2,1);
sc = Tinv(1,1);
scale_recovered = sqrt(ss*ss + sc*sc)
theta_recovered = atan2(ss,sc)*180/pi

%%
% The recovered values should match your scale and angle values selected in
% *Step 2: Resize and Rotate the Image*.

%% Step 6: Recover the Original Image
% Recover the original image by transforming the distorted image.
outputView = imref2d(size(original));
recovered  = imwarp(distorted,tform,'OutputView',outputView);

%%
% Compare |recovered| to |original| by looking at them side-by-side in a montage.
figure, imshowpair(original,recovered,'montage')


%%
% The |recovered| (right) image quality does not match the |original| (left)
% image because of the distortion and recovery process. In particular, the
% image shrinking causes loss of information. The artifacts around the edges are
% due to the limited accuracy of the transformation. If you were to detect
% more points in *Step 3: Find Matching Features Between Images*,
% the transformation would be more accurate. For example, we could have
% used a corner detector, |detectFASTFeatures|, to complement the SURF
% feature detector which finds blobs. Image content and image size also
% impact the number of detected features.
 
FOR MORE DETAILS CLICK HERE