Understanding Resolution in Machine Vision

It is important to understand how the resolution of a lens or sensor is specified and what those specifications really mean for your system. When choosing a lens, make sure its performance is well matched to the resolution requirements of your sensor.

Image: Edmund Optics GmbH


Understanding a manufacturer’s specifications for a lens can greatly simplify the research and purchasing processes. In order to know how a lens works, it is critical to understand resolution, magnification, contrast, f/#, and how to read common performance curves including Modulation Transfer Function (MTF), Depth of Field (DOF), Relative Illumination, and distortion. Resolution is a measurement of an imaging system’s ability to reproduce object detail, and can be influenced by factors such as the type of lighting used, the pixel size of the sensor, or the capabilities of the optics. The smaller the object detail, the higher the required resolution.

Dividing the number of horizontal or vertical pixels on a sensor into the size of the object one wishes to observe will indicate how much space each pixel covers on the object and can be used to estimate resolution. However, this does not truly determine if the information on the pixel is distinguishable from the information on any other pixel. As a starting point, it is important to understand what can actually limit system resolution. An example is shown in Figure 1: a pair of squares on a white background. If the squares are imaged onto neighbouring pixels on the camera sensor, they will appear as one larger rectangle in the image (Figure 1a) rather than two separate squares (Figure 1b). In order to distinguish the squares, a certain amount of space is needed between them, at least one pixel. This minimum distance is the limiting resolution of the system. The absolute limitation is defined by the size of the pixels on the sensor as well as the number of pixels on the sensor.
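The pixel-coverage estimate described above can be sketched in Python. The field-of-view and pixel-count values below are illustrative assumptions, not figures from this article:

```python
def pixel_coverage(object_size_mm: float, active_pixels: int) -> float:
    """Estimate how much of the object each pixel covers (mm per pixel)
    by dividing the object size by the pixel count along one axis."""
    return object_size_mm / active_pixels

# Assumed example: a 100 mm wide object imaged onto a sensor
# with 2048 horizontal pixels.
coverage_mm = pixel_coverage(100.0, 2048)
print(f"Each pixel covers about {coverage_mm * 1000:.1f} um of the object")
```

As the text notes, this is only a first estimate: it says nothing about whether neighbouring pixels actually carry distinguishable information.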

Line Pair and Sensor Limitations

The relationship between alternating black and white squares is often described as a line pair. Typically, resolution is defined by a frequency measured in line pairs per mm (lp/mm). A lens's resolution is, unfortunately, not an absolute number. At a given frequency, the ability to see the two squares as separate entities depends on the grey scale level. The greater the grey scale separation between the squares and the space between them (Figure 1b), the more robustly the squares can be resolved. This grey scale separation is known as contrast (at a specified frequency), with the spatial frequency given in lp/mm. For this reason, specifying resolution in lp/mm is extremely useful when comparing lenses and determining the best choice for a given sensor and application.

The sensor is where the system resolution calculation begins. Starting with the sensor makes it easier to determine what lens performance is required to match the sensor or other application requirements. The highest frequency which can be resolved by a sensor, the Nyquist frequency, is effectively two pixels, or one line pair. Table 1 shows the Nyquist limit associated with pixel sizes found on some commonly used sensors. The resolution of the sensor, also referred to as the image space resolution for the system, can be calculated by multiplying the pixel size in μm by 2 (to create a pair) and dividing that into 1000 to convert to mm:

image space resolution [lp/mm] = 1000 / (2 × pixel size [μm])
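The calculation described above (image space resolution in lp/mm from pixel size in μm) can be sketched as follows; the function name is my own, and the loop simply reproduces the pixel sizes listed in Table 1:

```python
def image_space_resolution_lp_per_mm(pixel_size_um: float) -> float:
    """Nyquist-limited image space resolution in lp/mm:
    one line pair spans two pixels, and 1000 um = 1 mm."""
    return 1000.0 / (2.0 * pixel_size_um)

# Reproduce the values in Table 1:
for pixel_size in (1.67, 2.2, 3.45, 4.54, 5.5):
    limit = image_space_resolution_lp_per_mm(pixel_size)
    print(f"{pixel_size} um -> {limit:.1f} lp/mm")
```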

Sensors with larger pixels will have lower limiting resolutions. Sensors with smaller pixels will have higher limiting resolutions. With this information, the limiting resolution on the object to be viewed can be calculated. In order to do so, the relationships between the sensor size, the field of view, and the number of pixels on the sensor need to be understood.
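One common way to carry the sensor's limiting resolution into object space is to scale the image space resolution by the primary magnification (sensor size divided by field of view). This is a sketch under assumed values, not a calculation from the article:

```python
def object_space_resolution_lp_per_mm(pixel_size_um: float,
                                      sensor_size_mm: float,
                                      fov_mm: float) -> float:
    """Limiting resolution on the object: the sensor's Nyquist limit
    scaled by the primary magnification (sensor size / field of view)."""
    image_space = 1000.0 / (2.0 * pixel_size_um)  # lp/mm at the sensor
    pmag = sensor_size_mm / fov_mm                # primary magnification
    return image_space * pmag

# Assumed example: 3.45 um pixels, 8.45 mm sensor width,
# 100 mm horizontal field of view.
res = object_space_resolution_lp_per_mm(3.45, 8.45, 100.0)
print(f"Limiting resolution on the object: {res:.2f} lp/mm")
```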

Sensor size refers to the size of a camera sensor’s active area, typically specified by the sensor format size. However, the exact sensor proportions will vary depending on the aspect ratio, and the nominal sensor formats should be used only as a guideline, especially for telecentric lenses and high magnification objectives. The horizontal or vertical sensor size can be directly calculated from the pixel size and the horizontal or vertical number of active pixels on the sensor.
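The last sentence above, that the active-area dimensions follow directly from pixel size and active pixel count, can be sketched as follows. The pixel size and array dimensions are illustrative assumptions, not tied to a specific sensor:

```python
def sensor_dimension_mm(pixel_size_um: float, active_pixels: int) -> float:
    """Horizontal or vertical active-area size in mm:
    pixel size (um) times pixel count, converted from um to mm."""
    return pixel_size_um * active_pixels / 1000.0

# Assumed example: 3.45 um pixels on a 2448 x 2048 array.
h = sensor_dimension_mm(3.45, 2448)
v = sensor_dimension_mm(3.45, 2048)
print(f"Active area: {h:.2f} mm x {v:.2f} mm")
```

Computing the active area this way avoids relying on the nominal format name, which, as noted above, is only a guideline.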

Figure 1: Resolving two squares. If the space between the squares is too small (a), the camera sensor will be unable to resolve them as separate objects. (Image: Edmund Optics GmbH)

 

Table 1: As pixel sizes get smaller, the associated Nyquist limit in lp/mm rises in inverse proportion.

Pixel Size (μm) | Associated Nyquist Limit (lp/mm)
1.67 | 299.4
2.2  | 227.3
3.45 | 144.9
4.54 | 110.1
5.5  | 90.9

 
