I am unable to print colored text - python-3.x

I am unable to print the text in orange. I identified the edges of the image and then printed a text on it.
%matplotlib inline
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('ind_maharashtra.png', 0)
edges = cv2.Canny(img, 100, 20)
cv2.imwrite('Edged_img.jpg', edges)  # save the edge image, then reload it below
img1 = cv2.imread('Edged_img.jpg', 0)
font = cv2.FONT_HERSHEY_SIMPLEX  # font was missing from the original snippet
cv2.putText(img1, 'JAI MAHARASHTRA !!', (70, 150), font, 0.7, (255, 69, 0), 2, cv2.LINE_8)
cv2.imshow('Maharashtra Map', img1)
#cv2.imshow('Maharashtra Map', img)
cv2.waitKey(0)

The problem is that the image you are trying to draw on (the image named img1) is a gray-scale image, since the 2nd argument of cv2.imread is 0 in the following line:
img1 = cv2.imread('Edged_img.jpg',0)
You have 2 options to fix this issue. First one is to load the image as a color image as follows:
img1 = cv2.imread('Edged_img.jpg')
Alternatively, if you want your canvas to have a gray-ish look, you can just replicate the single channel to form a 3 channel image as follows:
img1 = cv2.imread('Edged_img.jpg', 0)
img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)

You are loading your JPG in grayscale, so you will only be able to write grayscale values to img1.
OpenCV Imread docs
Change this line:
img1 = cv2.imread('Edged_img.jpg', 0)
to:
img1 = cv2.imread('Edged_img.jpg', 1)
As you can see from the linked docs, using these numbers is OK, but you are actually setting a flag, so you could use the named flag definitions to make your code clearer. Had you used the named flags, you would likely not have had this issue.
You can change your line to
img1 = cv2.imread('Edged_img.jpg', cv2.IMREAD_COLOR)
Look how much clearer and more understandable that is, especially when you come back to this code or hand it over to another developer in a few months' time.


detecting similar objects and cropping them from the image

I have to extract this:
from the given image:
I tried contour detection but that gives all the contours. But I specifically need that object in that image.
My idea is to:
Find the objects in the image
Draw bounding box around them
Crop them and save them individually.
I am working with OpenCV and Python 3, which I am fairly new to.
As seen, there are three objects similar to the given template but of different sizes. There are also other boxes which are not in the area of interest. After cropping I want to save them as three separate images. Is there a solution to this situation?
I tried multi-scale template matching with the cropped template.
Here is an attempt:
# import the necessary packages
import numpy as np
import argparse
import imutils
import glob
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--template", required=True, help="Path to template image")
args = vars(ap.parse_args())
# load the template image, convert it to grayscale, and detect edges
template = cv2.imread(args["template"])
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
template = cv2.Canny(template, 50, 200)
(tH, tW) = template.shape[:2]
image = cv2.imread('input.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# loop over the scales of the image
for scale in np.linspace(0.2, 1.0, 20)[::-1]:
    # resize the image according to the scale, and keep track
    # of the ratio of the resizing
    resized = imutils.resize(gray, width = int(gray.shape[1] * scale))
    r = gray.shape[1] / float(resized.shape[1])
    # if the resized image is smaller than the template, then break
    # from the loop
    if resized.shape[0] < tH or resized.shape[1] < tW:
        break
    # detect edges in the resized, grayscale image and apply template
    # matching to find the template in the image
    edged = cv2.Canny(resized, 50, 200)
    res = cv2.matchTemplate(edged, template, cv2.TM_CCOEFF_NORMED)  # normalized, so a 0.95 threshold makes sense
    loc = np.where(res >= 0.95)
    for pt in zip(*loc[::-1]):
        cv2.rectangle(image, (int(pt[0]*r), int(pt[1]*r)), (int((pt[0] + tW)*r), int((pt[1] + tH)*r)), (0,255,0), 2)
Result that I am getting:
Expected result is bounding boxes around all the post-it boxes
I'm currently on mobile so I can't really write code, but this link does exactly what you're looking for!
If anything isn't clear I can adapt the code to your example later this evening, when I have access to a laptop.
In your case I would crop out the content of the shape (the post-it) and template match just on the edges. That'll make sure it's not thrown off by the text inside.
Good luck!

Show resized image with cv2

Windows 10
I reduced the code to the very core and the problem still exists.
import cv2
img = cv2.imread('imm.jpg')
cv2.imshow('image', img)
cv2.waitKey(0)
In this way it works as it should: it shows an image with its resolution (4k). But if I resize it in this way:
import cv2
img = cv2.imread('imm.jpg')
res = cv2.resize(img, (160,90))
cv2.imshow('image', res)
cv2.waitKey(0)
it creates a new window in the toolbar, whose name is 'image', as it should, but I cannot focus it and see the result.
What's the problem?

Highlighting specific text in an image using python

I want to highlight specific words/sentences in a website screenshot.
Once the screenshot is taken, I extract the text using pytesseract and cv2. That works well and I can get text and data about it.
import pytesseract
import cv2
if __name__ == "__main__":
    img = cv2.imread('test.png')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    result = pytesseract.image_to_data(img, lang='eng', nice=0, output_type=pytesseract.Output.DICT)
Using the results object I can find needed words and sentences.
The question is how to go back to the image and highlight those words?
Should I be looking at other libraries or there is a way to get pixel values and then highlight the text?
Ideally, I would like to get start and end coordinates of each word, how can that be done?
You can use the pytesseract.image_to_boxes method to get the bounding-box position of each character identified in your image. You can also use it to draw bounding boxes around specific characters if you want. The code below draws rectangles around the characters identified in my image.
import cv2
import pytesseract
import matplotlib.pyplot as plt
filename = 'sf.png'
# read the image and get the dimensions
img = cv2.imread(filename)
h, w, _ = img.shape # assumes color image
# run tesseract, returning the bounding boxes
boxes = pytesseract.image_to_boxes(img)
print(pytesseract.image_to_string(img)) #print identified text
# draw the bounding boxes on the image
for b in boxes.splitlines():
    b = b.split()
    cv2.rectangle(img, (int(b[1]), h - int(b[2])), (int(b[3]), h - int(b[4])), (0, 255, 0), 2)

How to detect specific spots from image and crop in multiple images in python

I am trying to detect some spots in the image and save them as multiple cropped images.
I just want to crop the WBCs.
Script: this is what I have so far, but I am not sure how to proceed.
import cv2
import numpy as np;
# Read image
im = cv2.imread("C:/Users/Desktop/MedPrime_Tech_Work/tag-145218-Default-10X.jpg", cv2.IMREAD_GRAYSCALE)
# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector_create()
# Detect blobs.
keypoints = detector.detect(im)
print (keypoints)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
cv2.waitKey(0)
My code runs, but the problem is how to detect the spots shown in the image.
Thanks in advance; please suggest something.
Error I am getting:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-11-2754358a7c43> in <module>()
----> 1 import PyDIP as dip
2 import PyDIP.PyDIPviewer as dv
ModuleNotFoundError: No module named 'PyDIP'
I am trying to install PyDIP but am unable to.
I don't have OpenCV here, but use PyDIP instead (I'm an author).
The detection in this case is fairly trivial because of the different color and sizes of the cells. Spaceman suggested to use the HSV color space. That is a good idea, but since this is so simple, I'm going to just use the individual green and blue channels instead. The "wbc" cells are very dark in the green channel, but not in the blue one. Everything that is black (outside of the field of view, and your drawings) is dark in both channels. So detecting the "wbc" and the "platelet" cells is a matter of finding dark regions in the green channel that are not dark in the blue. Next, a simple size criterion will exclude the "platelet" cells.
Finally, to crop, I group nearby detections (as these seem to belong together), and crop the groups from the image:
import PyDIP as dip
img = dip.ImageReadTIFF('/home/cris/tmp/cells')
# detect wbc
mask = dip.Erosion(img.TensorElement(2), dip.SE(7, 'elliptic'))
wbc = (img.TensorElement(1) < 50) & (mask > 50) # threshold green and blue channels, exact threshold values don't matter, color differences are obvious
wbc = dip.Closing(wbc, dip.SE(15, 'elliptic')) # fills small holes
wbc = dip.Opening(wbc, dip.SE(25, 'elliptic')) # removes small cells
# group and find bounding boxes
labs = dip.Label(dip.BinaryDilation(wbc, 2, 50)) # 50 is the half the distance between detections that belong together
labs *= wbc
m = dip.MeasurementTool.Measure(labs, features=['Minimum','Maximum'])
# crop
margin = 10 # space to add around detections when cropping
for obj in m.Objects():
    left = int(m[obj]['Minimum'][0]) - margin
    right = int(m[obj]['Maximum'][0]) + margin
    top = int(m[obj]['Minimum'][1]) - margin
    bottom = int(m[obj]['Maximum'][1]) + margin
    crop = img[left:right, top:bottom]
    dip.ImageWriteTIFF(crop, '/home/cris/tmp/cells%d'%obj)
This leads to the following small images:
When you say your code is working, I assume that means you're already detecting what you want to detect, and your question about cropping is just that you want to get images of the detected spots.
If I got that right, then remember that OpenCV images are just numpy arrays, so all you need is a subset of that array. Does this do what you want?
blob_images = list()
for kp in keypoints:
    x, y = int(kp.pt[0]), int(kp.pt[1])
    r = int(kp.size / 2)  # kp.size is the blob diameter, so halve it for a radius
    crop = im[y-r:y+r, x-r:x+r]
    blob_images.append(crop)
Now you should have a list of cropped images. From there, you can filter them so you only get white blood cells or save them with cv2.imwrite() or do whatever you want.
Be warned that crop is just a view into the im array, not a separate copy. This saves memory, but modifying one will modify the other. Use im[y-r:y+r, x-r:x+r].copy() if you need to decouple them.
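A quick demonstration of the view-versus-copy behaviour mentioned above:

```python
import numpy as np

im = np.zeros((4, 4), dtype=np.uint8)

view = im[1:3, 1:3]   # a view: shares memory with im
view[:] = 255         # writing through the view changes im as well

copy = im[1:3, 1:3].copy()  # an independent copy
copy[:] = 7                 # im is unaffected by this
```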
I tried with your image to detect the WBCs, which are violet in color.
import cv2
import numpy as np;
# Read image
im = cv2.imread("image.jpg")
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
I converted the color space since I assume the WBCs are always going to be violet. Since we have to detect objects by color, it is better to convert to the HSV color space; you can read the link below to learn what HSV is. Below is the output.
gray_image = cv2.cvtColor(hsv, cv2.COLOR_BGR2GRAY)
ret, thresh_img = cv2.threshold(gray_image, 210, 255, cv2.THRESH_BINARY)
Here 210 is the grayscale threshold value for getting the spots which are white in color, i.e. the WBCs.
im2, contours, hierarchy = cv2.findContours(thresh_img,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
This image will have all the WBCs in black. By using findContours we get the contours of those black regions.
original_image_copy = im.copy()  # keep a clean copy to crop from
count = 0
for c in contours:
    area = cv2.contourArea(c)
    if area > 10 and area < 2500:
        (x, y), radius = cv2.minEnclosingCircle(c)
        center = (int(x), int(y))
        radius = int(radius)
        img = cv2.circle(im, center, radius, (0, 255, 0), 10)
        x, y, w, h = cv2.boundingRect(c)
        if w > 20 and h > 20:
            roi = original_image_copy[y:y+h, x:x+w]
            cv2.imwrite("images/roi" + str(count) + ".jpg", roi)  # make sure you have a folder `images` next to this script
            count += 1
        # cv2.drawContours(im, [c], 0, (0,255,0), 10)
I know this won't be the perfect answer, but you can try to remove some of the noise at the corners of the circles by capturing the image in a different way, or you can use image-processing functions such as morphological operations like dilation.

How to remove glare from a face opencv

I'm trying to posterize an image using OpenCV. Here's the input
I added the following script, which I found here, to get a posterize effect:
import numpy as np
import cv2
im = cv2.imread('messi5.jpg')
n = 2 # Number of levels of quantization
indices = np.arange(0,256) # List of all colors
divider = np.linspace(0,255,n+1)[1] # we get a divider
quantiz = np.linspace(0,255,n).astype(int) # we get quantization colors (np.int0 was removed in NumPy 2.x)
color_levels = np.clip((indices/divider).astype(int),0,n-1) # color levels 0,1,2..
palette = quantiz[color_levels] # Creating the palette
im2 = palette[im] # Applying palette on image
im2 = cv2.convertScaleAbs(im2) # Converting image back to uint8
Here's the output I'm getting by setting n=5, which is close to my desired output.
However, the glare on the original image is affecting the final output (I need an output where the face has an almost uniform colour). How do I remove the glare from the original input?