
discrete masks so that I can generate masks of other sizes N and sigma.

Let's talk about the generic case for the LoG operator. It stands for "Laplacian of the Gaussian", which it literally is. The intention is to low-pass the image with a Gaussian of some radius, and then take the Laplacian of that image to look at the highest-frequency data that remains below that cut-off. Thus:

A simple Gaussian like this

      1 1
  G = 1 1

would blur the image some. (Naturally you can use larger kernels.) Taking the Laplacian of that image using the standard Laplacian kernel

      0  1  0
  L = 1 -4  1
      0  1  0

will result in an image that holds the second derivative of the blurred image. One problem is that half of the image is negative, so we tend to add 128 in this step, and often scale the values down, since the Laplacian by itself is a wonderful noise amplifier. You can now look at the zero (or 128, now) crossings in the image to produce your contour lines. Or you can combine the two kernels together; remember, convolution is commutative and associative:

  (I o G) o L = I o (G o L)
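The closing identity is what makes a single LoG mask possible: folding the Gaussian and the Laplacian into one kernel does both jobs in one pass. A minimal sketch of generating an N x N LoG mask for a given sigma, sampling the analytic Laplacian-of-Gaussian rather than convolving discrete kernels (the function name, normalization, and zero-DC shift are my choices, not from the post):

```python
import numpy as np

def log_kernel(n, sigma):
    """Sample the analytic Laplacian-of-Gaussian on an n x n grid.

    Up to a positive constant, LoG(x, y) is
        ((x^2 + y^2 - 2*sigma^2) / sigma^4) * exp(-(x^2 + y^2) / (2*sigma^2)),
    which, like the 3x3 Laplacian L above, has a negative center surrounded
    by positive lobes.  The mean is subtracted so flat regions map to zero.
    """
    ax = np.arange(n) - (n - 1) / 2.0        # integer offsets centered on 0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()                      # zero response to constant input

# Example: a 9x9 mask for sigma = 1.4 (a common pairing)
mask = log_kernel(9, 1.4)
```

N should be odd and roughly 6*sigma or more so the mask is not truncated mid-lobe.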

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/0929b.txt (2016-02-14)


consecutive windows.

The algorithm (Huang's algorithm) consists of the following steps:

1. Store the first window's data in an n x n array, Window. Compute the local histogram and find the median (using the quicksort algorithm). Compute the number ltmdn of points whose intensity is less than that of the median.

2. Move on to the next window, updating the n elements. Update the histogram: decrease the values for
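The two steps above can be sketched in one dimension (my simplification: a hypothetical sliding_median over 8-bit values with odd window length, where the ltmdn counter lets the median be rebalanced by stepping through at most a few histogram bins instead of re-sorting):

```python
def sliding_median(values, n):
    """Huang-style running median of a sequence of ints in 0..255, window n (odd)."""
    hist = [0] * 256
    for v in values[:n]:                 # step 1: histogram of the first window
        hist[v] += 1
    target = (n + 1) // 2                # rank of the median in the window

    count, mdn = 0, 0
    while count + hist[mdn] < target:    # find the median by scanning bins
        count += hist[mdn]
        mdn += 1
    ltmdn = count                        # points strictly below the median

    out = [mdn]
    for i in range(n, len(values)):      # step 2: slide the window
        old, new = values[i - n], values[i]
        hist[old] -= 1                   # remove the leaving pixel
        if old < mdn:
            ltmdn -= 1
        hist[new] += 1                   # add the entering pixel
        if new < mdn:
            ltmdn += 1
        if ltmdn >= target:              # too many below: walk the median down
            while ltmdn >= target:
                mdn -= 1
                ltmdn -= hist[mdn]
        else:                            # walk it up while still short of rank
            while ltmdn + hist[mdn] < target:
                ltmdn += hist[mdn]
                mdn += 1
        out.append(mdn)
    return out
```

The full 2-D version does the same thing per window position, removing the leftmost column of the n x n window from the histogram and adding the new rightmost column.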

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/0214.txt (2016-02-14)


To: jcr6@aol.com
Newsgroups: sci.image.processing

Kent Curtis wrote:
> I am experimenting with grayscale image enhancement using sharpening
> kernels such as
>
>       -1 -1 -1
>   K = -1  9 -1
>       -1 -1 -1
>
> Using a tool such as Paint Shop Pro, I find that for some (though not
> all) images, unsharp mask (USM) produces more aesthetic images than
> simply applying a sharpening convolution kernel. I'm familiar with the
> concept of unsharp masking, but have not found much discussion of
> actual implementations. Can anyone point me to source code or examples
> of implementing USM? Thanks in advance, Kent Curtis

That kernel that you showed above is really a high-pass filter. You are essentially subtracting a blurred version of the image from the original, with the following kernels:

  -1 -1 -1
  -1 -1 -1    a simple blur, with a negative weight for subtraction
  -1 -1 -1

and

   0  0  0
   0 10  0    weight on the original
   0  0  0

Thus you could use larger blur kernels than the simple blur above. The parameters for a typical unsharp mask are: (1) the radius of the Gaussian for the blur (the unsharp image to be subtracted), (2) the weight of the
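The decomposition above can be checked numerically: the two kernels sum to K, and applying K is the same as "original + 9 x (original - box blur)". A small self-contained sketch (function names and the zero-padding choice are mine):

```python
import numpy as np

def filter3x3(img, k):
    """Apply a 3x3 kernel to a 2-D image with zero padding.

    This is correlation, which equals convolution for the symmetric
    kernels used here."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def unsharp_mask(img, amount=1.0):
    """original + amount * (original - blurred): boost what the blur removed."""
    box = np.ones((3, 3)) / 9.0            # the blurred ("unsharp") image
    blurred = filter3x3(img, box)
    return img + amount * (img - blurred)
```

With amount = 9 this reproduces the poster's kernel K exactly, since K = 10*delta - (3x3 of ones) = 10*delta - 9*box.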

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/0929a.txt (2016-02-14)


that simple, the results would be no different from high-pass linear filtering. The originators of the technique for scientific imaging at the AAO have a web page which contains some technical details, and the implication is that the final image is the result of combining, in a multiplicative fashion, the original image with a blurred negative copy.

You could always divide by the Gaussian blur instead of subtracting it. That will make a difference. This comes dangerously close to bringing in the whole discussion of gamma, but since we're talking about enhancing high frequencies, the goal is to make things perceptually sharper, not keep exact intensities. So let's not.

There are really two common uses for the unsharp mask:

1. Reduce the dynamic range of the image so that a much broader range of initial intensities is now visible. (Especially conversion from film or plates down to prints.)

2. Slightly enhance high frequencies, and possibly clip a little at black/white. Sharpen after some kind of blur operation has reduced the noise. More like an HF boost.

Remember that even though an unsharp mask operation is technically a high-pass filter, the roll-off on the Gaussian is so
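The "divide instead of subtract" idea can be sketched directly; this is my reading of the multiplicative combination, not code from the AAO page (function name and the epsilon guard are my additions):

```python
import numpy as np

def unsharp_divide(img, blurred, eps=1e-6):
    """Divisive unsharp mask: original / blurred.

    Dividing out the low-frequency trend compresses the dynamic range
    (use 1 on the list above) while preserving local contrast: flat
    regions map to ~1, and a pixel brighter than its blurred surround
    maps above 1.  The result would then be rescaled to display range."""
    return img / (blurred + eps)
```

Unlike the subtractive form, the boost here is relative to local brightness, which is why it behaves differently in dark versus light regions.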

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/1025.txt (2016-02-14)


Does anyone have experience with an image-processing-based auto-focus function in light microscopy? I use a simple one-dimensional Laplace filter (NIH Image) to get the maximum of the greyscale dynamic of the picture (cell culture), but it is slow and does not work very accurately. I am grateful for any information. Ralf

There are two common ways to define "in focus" (as opposed to INFS, INFOCUS the company):

1. That state where the image is the sharpest (how do you define sharpest? see below), or

2. That state where the image contrast is greatest.

When I manually focus a camera, I try to find some lines or edges in the center of the field of view and make them sharp. If you consider being out of focus a blur function, the edges are skinniest when the image is focussed. So how do you implement this? You could have several 1-D FFTs, taking a few milliseconds each, and maximize the power in the high frequencies. Some horizontal, some vertical; hopefully one or more will be in an interesting area of the image. This is one way to define sharpest. You could have several lines where we
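The "maximize power in the high frequencies" criterion can be approximated without FFTs at all; a hedged sketch using mean squared finite differences as the sharpness score (the metric choice is mine, in the spirit of definition 1 above):

```python
import numpy as np

def focus_score(img):
    """Sharpness as mean squared horizontal + vertical pixel differences.

    Defocus blur flattens edges, shrinking the differences, so the score
    peaks near best focus.  In an autofocus loop you would evaluate this
    on a few rows/columns at each lens position and climb to the maximum."""
    img = np.asarray(img, dtype=float)
    gx = np.diff(img, axis=1)            # horizontal differences
    gy = np.diff(img, axis=0)            # vertical differences
    return (gx ** 2).mean() + (gy ** 2).mean()
```

Squaring the differences weights strong edges heavily, which mimics "maximize the power in the high frequencies" for the dominant edge content.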

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/0913c.txt (2016-02-14)


for y := 0 to ImageHeight - 1 do begin
  { For all pixels in one row }
  for x := 0 to ImageWidth - 1 do begin
    { Is there a point there or not? If not, just skip the pixel }
    if IsPoint(x, y) then begin
      { Now you need to iterate over one of the unknown variables, theta,
        to be able to determine the other unknown, r }
      for theta := 0 to 360 - 1 do begin
        r := Round(x * cos(theta * PI / 360) + y * sin(theta * PI / 360));
        { Plot the finding (theta, r) into an array.
          Ignore negative values; trust me, it's OK }
        if r >= 0 then
          Inc(HoughArray[theta, r]);
      end;
    end;
  end;
end;

The size of the array can be calculated as

  HoughArray : array[0..MaxTheta, 0..MaxR] of integer;

where MaxTheta, due to the loop, naturally is 360 (we determine that ourselves), and

  MaxR = Sqrt(ImageHeight*ImageHeight + ImageWidth*ImageWidth)

which is never more than ImageHeight + ImageWidth.

Best regards,
Kim Madsen
kbm@optical.dk

brian crowley wrote:

Martin Philpott wrote:
> I understand that the Hough transform is the conversion from (x, y)
> space to (r, theta) space. My question is: from where do you measure
> r and theta? It makes a difference.

I haven't actually programmed this myself, but my understanding is that you are trying to correlate the angle of the maximum gradient of each pixel with the adjacent pixels. In practice, the gradients for some subsample of the angles (maybe 4 or 8) are determined and the maximum chosen.

> Do you consider the problem from the centre of the image, or from one
> corner, or from all points within the image?

All the pixels are being considered, with themselves as the origin, but part of the algorithm is throwing away all those pixels with no significant gradients, that is, picking out pixels that are part
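The Pascal sketch above ports naturally to a vectorized accumulator. A hedged NumPy version (names are mine; theta spans half a turn, matching the half-circle coverage of the loop above, and negative r is likewise skipped):

```python
import numpy as np

def hough_accumulate(points, height, width, n_theta=180):
    """Vote in (theta, r) space for each feature point.

    For every point (x, y) and each sampled angle theta in [0, pi),
    r = x*cos(theta) + y*sin(theta) indexes the accumulator row; negative
    r is skipped, as in the Pascal version.  Peaks mark dominant lines."""
    max_r = int(np.ceil(np.hypot(height, width)))   # the image diagonal
    acc = np.zeros((n_theta, max_r + 1), dtype=int)
    thetas = np.arange(n_theta) * np.pi / n_theta
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        r = np.round(x * cos_t + y * sin_t).astype(int)
        ok = r >= 0                                  # fold away negative r
        acc[np.nonzero(ok)[0], r[ok]] += 1
    return acc, thetas
```

A horizontal line y = c collects all its votes at theta = pi/2, r = c, which is an easy sanity check for the (r, theta) conventions the thread is asking about.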

Original URL path: http://www.efg2.com/Lab/Library/UseNet/1999/1119a.txt (2016-02-14)


Can someone tell me about subj?

Well, it's not that much complicated. Here's what to do at each pixel, once you've got the real mapping coordinates, let's say u and v:

  f0 = (1 - frac(u)) * (1 - frac(v))
  f1 = frac(u) * (1 - frac(v))
  f2 = frac(u) * frac(v)
  f3 = (1 - frac(u)) * frac(v)

where frac(t) is the fractional part of t.
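These four weights are the standard bilinear interpolation coefficients; a small sketch applying them to sample an image at real coordinates (the function name and the img[row, column] convention are my assumptions):

```python
import numpy as np

def sample_bilinear(img, u, v):
    """Sample img at real coordinates (u, v), u horizontal, v vertical.

    f0..f3 weight the four neighboring pixels exactly as in the post;
    the weights are non-negative and sum to 1, so the result always
    lies within the range of those four pixels."""
    i, j = int(np.floor(u)), int(np.floor(v))   # top-left integer corner
    fu, fv = u - i, v - j                       # frac(u), frac(v)
    f0 = (1 - fu) * (1 - fv)                    # pixel (i,     j)
    f1 = fu * (1 - fv)                          # pixel (i + 1, j)
    f2 = fu * fv                                # pixel (i + 1, j + 1)
    f3 = (1 - fu) * fv                          # pixel (i,     j + 1)
    return (f0 * img[j, i] + f1 * img[j, i + 1]
            + f2 * img[j + 1, i + 1] + f3 * img[j + 1, i])
```

At exact integer coordinates three of the weights vanish and the fourth is 1, so the interpolation reproduces the original pixels.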

Original URL path: http://www.efg2.com/Lab/Library/UseNet/2000/0523.txt (2016-02-14)


Newsgroups: comp.graphics.algorithms

Phil O'Connor wrote:
> I'm having trouble finding a practical approach to simulating a
> fish-eye lens camera. A normal rectilinear lens obeys the law
>
>   r = f * tan(theta)
>
> where r is the radial distance of an image
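For contrast with the rectilinear law quoted above, a common fisheye model (equidistant projection, r = f * theta) can be put side by side. The equidistant choice is my assumption, since the excerpt cuts off before the answer:

```python
import math

def rectilinear_r(f, theta):
    """Rectilinear (pinhole) lens: r = f * tan(theta).

    r blows up as theta approaches 90 degrees, which is why a normal
    lens cannot capture a ~180-degree field of view."""
    return f * math.tan(theta)

def fisheye_r(f, theta):
    """Equidistant fisheye model: r = f * theta (theta in radians).

    r grows only linearly with the angle, so a full hemisphere maps to a
    finite image circle."""
    return f * theta
```

The two laws agree for small angles and diverge toward the edge of the field, which is exactly the fisheye "look" a simulation has to reproduce.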

Original URL path: http://www.efg2.com/Lab/Library/UseNet/2002/0226.txt (2016-02-14)
