What can you see with a 67MP camera?

Remember when machine vision pioneers got stuff done with VGA sensors at 0.3MP? And the industry got really excited with 1MP sensors? Moore’s law keeps driving capacities and performance up, and relative costs down. With the Teledyne e2v Emerald 67MP sensor, cameras like the Genie Nano-10GigE-8200 open up new possibilities.

12MP sensor image – Courtesy Teledyne DALSA
67MP sensor image – Courtesy Teledyne DALSA

So what? The 67MP view above right doesn’t appear massively compelling…

Well, at this view, without zooming in, we’d agree…

But at 400% zoom, below, look at the pixelation differences:

Both images below show the same target region, with the same lighting and lens, each zoomed (with Gimp) to 400%. There is enough pixelation in the 12MP image to raise doubts about effective edge detection, whether on the identifying digits (33) or on the metal-rimmed holes. The 67MP image, by contrast, has far less pixelation, passing a readily usable image to the host for processing. How much resolution does your application require?
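
As a rough rule of thumb (our illustrative numbers, not a Teledyne figure), you can estimate the resolution an application needs from the field of view, the smallest feature to resolve, and how many pixels you want across that feature. A minimal sketch in Python:

```python
# Rough sizing: how many pixels does an application need?
# All numbers below are assumed examples -- substitute your own.
fov_width_mm = 400.0       # horizontal field of view
min_feature_mm = 0.5       # smallest detail to resolve (e.g. a stamped digit stroke)
pixels_per_feature = 4     # common rule of thumb for reliable edge detection

pixels_needed = fov_width_mm / min_feature_mm * pixels_per_feature
print(f"Horizontal pixels needed: {pixels_needed:.0f}")   # 3200 for these numbers
```

Run the same arithmetic for the vertical axis; once the answer exceeds what a 12MP sensor offers (roughly 4096 x 3000 pixels), a higher-resolution sensor starts to earn its keep.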

12MP zoomed 400%
67MP zoomed 400%

Important “aside”: Sensor format and lens quality also matter

Sensor format refers to the physical size of the sensor, together with the pixel shape and pixel density. Of course the lens must physically mount to the camera body (e.g. S, C, M42, etc.), but it must also create an image circle that appropriately covers the sensor’s pixel array. The Genie Nano-10GigE-8200 uses the Teledyne e2v Emerald 67M, which packs just over 67 million pixels, each square pixel just 2.5 µm wide and high, onto a CMOS sensor with an active area of only about 20.5mm x 20.5mm.

Consider other good-quality cameras and sensors with pixel sizes in the 4 – 5 µm range. That leads EITHER to fewer pixels overall in a sensor array of the same size, OR to a much larger sensor to accommodate more pixels. The former may limit what can be accomplished with a single camera. The latter would necessarily make the camera body larger, the lens mount larger, and the lens more expensive to manufacture.
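
A quick back-of-the-envelope calculation shows why pixel pitch drives sensor (and lens) size. A minimal sketch, comparing the Emerald 67M’s 2.5 µm pixels against a hypothetical 4.5 µm design at the same 8192 x 8192 resolution:

```python
# Active sensor width = horizontal pixel count x pixel pitch.
pixels_per_side = 8192      # the Emerald 67M is an 8192 x 8192 array
for pitch_um in (2.5, 4.5): # Emerald 67M pitch vs a hypothetical 4-5 um class pixel
    side_mm = pixels_per_side * pitch_um / 1000.0
    print(f"{pitch_um} um pixels -> {side_mm:.1f} mm x {side_mm:.1f} mm active area")
# 2.5 um -> ~20.5 mm square; 4.5 um -> ~36.9 mm square, which would demand a
# much larger camera body, lens mount, and lens image circle.
```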

The lens quality, typically expressed via the Modulation Transfer Function (MTF), is also important. Not all lenses are created equal! A “good” quality lens may be enough for certain applications. For more demanding applications, a large-format sensor is wasted if the lens’ performance falls below the sensor’s capabilities.
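
MTF is essentially the contrast a lens preserves as a function of spatial frequency. At any one frequency it can be measured from the brightest and darkest gray levels of an imaged line-pair pattern; a minimal sketch, with hypothetical pixel values:

```python
# Michelson contrast (modulation) of an imaged line-pair pattern.
# i_max and i_min would come from pixel values in your captured test image.
def modulation(i_max: float, i_min: float) -> float:
    return (i_max - i_min) / (i_max + i_min)

print(modulation(200, 60))    # ~0.54: the lens still resolves this frequency well
print(modulation(135, 120))   # ~0.06: the lens has essentially run out of resolution
```

For 2.5 µm pixels the sensor’s Nyquist frequency is 200 lp/mm; a lens whose modulation collapses well before that frequency is effectively the “not designed for 2.5 µm pixels” case shown below.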

Two different lenses were used to take the above images, both covering the sensor size. However, the right image was taken with a lens designed for smaller pixels than the lens used on the left. – Courtesy Teledyne DALSA

The high-level views of the test chart above tease at the point we’re making, but it really pops if we zoom in. Look at the difference in contrast in the two images below!

Lens nominally a fit for the sensor format and mount type, but NOT designed for 2.5 µm pixels.
Lens designed for 2.5 µm pixels.

The takeaway point of this segment is that lensing matters! The machine vision field serves users tremendously well through its ecosystem of specialized sensor, camera, lens, and lighting suppliers. Even within the same supplier’s lineup, there are often sensors or lenses pitched at differing performance requirements. Consider our Knowledge Base guide on Lens Quality Considerations. Or call us at 978-474-0044.


Another example:

Below see the same concentric rings of a test chart, under the same lighting. The left image was obtained with a good 12MP sensor and a good-quality lens matched to the sensor format and pixel size. The right image used the 67MP sensor in the Genie Nano-10GigE-8200, also with a well-matched lens.

12MP sensor, zoomed 1600%
67MP sensor, zoomed to same FOV

If you need a single-camera solution for a large target, with high levels of detail, there’s no way around it – one needs enough pixels, together with a well-suited lens.

Teledyne DALSA 10GigE Genie Nano
Genie Nano 10GigE 8200 – Courtesy Teledyne DALSA

The Genie Nano 10GigE 8200, in both monochrome and color versions, is more affordable than you might think.

Once more with feeling…

Which of the following images will lead to the more effective outcomes? Choose your sensor, camera, lens, and lighting accordingly. Call us at 978-474-0044. Our sales engineers love to create solutions for our customers.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Prophesee event-based vision – a new paradigm

We don’t use terms like paradigm-shift lightly. Event-based vision (EBV) is shaking things up just as much as the arrival of frame-based and line-scan imaging once did. It’s that different.

Event-based vision applications areas – Courtesy Prophesee

Applications examples… and just enough about concepts

This informational blog skews towards applications examples, with just enough about concepts and EBV technology to lend balance. Event-based vision is so new, and so different from previous vision technologies, that we believe our readers may appreciate an examples-driven approach to understanding this radically new branch of machine vision.

Unless you’re an old hand at event-based vision (EBV) …

…and as of this writing in Summer 2025 few could be – let’s show a couple of short teaser videos before we explain the concepts or go deeper on applications.

Example 1: High-speed multiple-object counting and sizing

The following shows event-based imaging used both to count particles or objects and to estimate their size – all in a high-speed environment with many concurrent targets.

Courtesy Prophesee

Example 2: Eye-tracking – only need to track pupil changes

Now consider eye-tracking: in the video below, we see synchronized side-by-side views of the same scene. The left side was obtained with a frame-based sensor. The right side used a Prophesee event-based sensor. Why “waste” bandwidth and processing resources separating a pupil from an iris, eyelids, and eyebrows, when the goal is eye-tracking?

Courtesy Prophesee

Radically increased speed; massively less data

Prophesee Metavision’s sensor tracks “just” what changes in the field of view, instead of the frame-based approach of reading out whole sensor arrays, transmitting voluminous data from the camera to the host, and algorithmically looking for edges, blobs, or features.
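
To make the data model concrete: an event sensor outputs a sparse stream of (x, y, polarity, timestamp) tuples rather than full frames. A minimal sketch using the Python API of Prophesee’s Metavision SDK – assuming the SDK is installed, with “recording.raw” standing in for one of your own recordings:

```python
from metavision_core.event_io import EventsIterator

# Iterate over the event stream in 10 ms slices.
mv_iterator = EventsIterator(input_path="recording.raw", delta_t=10000)

for events in mv_iterator:
    # 'events' is a numpy structured array with fields x, y, p (polarity), t (us).
    if events.size == 0:
        continue   # nothing changed in the scene during this slice
    print(f"{events.size} events between t={events['t'][0]} and t={events['t'][-1]} us")
```

When nothing in the scene changes, the iterator simply yields empty slices – that is the bandwidth saving in action.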


Example 3: Object tracking

Not unlike the eye-tracking example above, but this time with multiple moving targets – vehicular traffic. With frame-based vision, bandwidth and processing are repeatedly spent on static pavement, light poles, guard rails, and other “clutter” comprising 80% of the field of view. But with event-based vision, the sensor only detects and transmits the moving vehicles.

Whether counting traffic for congestion planning/reporting, or avoiding collisions aboard a given vehicle, we don’t care what the vehicle looks like – only that it’s there, moving along a certain trajectory at a detected speed.
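
The bandwidth implications are easy to estimate with back-of-the-envelope numbers (all assumed for illustration – these are not Prophesee benchmarks):

```python
# Illustrative data-rate comparison, frame-based vs event-based.
width, height, fps, bytes_per_px = 1280, 720, 60, 1
frame_based_MBps = width * height * fps * bytes_per_px / 1e6

active_fraction = 0.20    # e.g. moving vehicles occupy ~20% of the scene
firing_rate = 0.1         # assumed fraction of active pixels firing per frame time
bytes_per_event = 8       # x, y, polarity, timestamp packed
events_per_sec = width * height * fps * active_fraction * firing_rate
event_based_MBps = events_per_sec * bytes_per_event / 1e6

print(f"Frame-based: {frame_based_MBps:.0f} MB/s; event-based: {event_based_MBps:.0f} MB/s")
```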

Example 4: Surveillance and security

Prophesee named this video “Optical Flow Crowd”, which we suggest is best understood in the context of security and surveillance. It’s not unlike the vehicular flow example above, except that cars and trucks mostly stay in-lane, whereas pedestrians move at diverse angles – and their arm, leg, head, and torso movements also convey information.

The (computed) vector overlays indicate speed and directional changes, important for revealing potentially dangerous actions, whether underway or likely to emerge. For example, does a raised arm indicate a handgun or knife being readied, or is it just a prelude to scratching an itchy nose? Is a pursuer turning towards another pedestrian, such that a nearby policeman should be alerted?

Example 5: Vibration monitoring and preventative maintenance

Motorized equipment is typically serviced on a preventative maintenance schedule, a break-fix approach, or some combination, depending on costs, legal liabilities, risks, etc. What if one could inexpensively identify vibration patterns that reveal wearing belts or bearings before a breakdown – and before there is preventable collateral damage to other components?
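
Conceptually, this reduces to frequency analysis of event activity: a vibrating edge fires events at the vibration frequency. A hedged numpy sketch of the idea (our illustration, not Prophesee’s actual algorithm), using synthesized timestamps:

```python
import numpy as np

# Synthesize 1 s of event timestamps (us) whose rate is modulated at 47 Hz,
# standing in for events from a region of interest on a motor housing.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1e6, 20000))
keep = rng.uniform(size=t.size) < 0.5 * (1 + np.sin(2 * np.pi * 47 * t / 1e6))
timestamps_us = t[keep]

# Bin events into a 1 ms activity signal and inspect its spectrum.
counts, _ = np.histogram(timestamps_us, bins=1000, range=(0, 1e6))
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(counts.size, d=1e-3)   # 1 ms bins -> frequencies in Hz
print(f"Dominant vibration frequency: {freqs[spectrum.argmax()]:.0f} Hz")   # ~47 Hz
```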

Courtesy Prophesee

Enough with the examples – how can I get event-based sensors?

1stVision is pleased to represent Prophesee with a wide range of sensors, evaluation kits, board-level and housed cameras, and an SDK designed for event-based vision applications.

Built on neuromorphic engineering principles inspired by the brain’s neural networks and human vision, Prophesee products are surprisingly affordable, enabling new fields of event-based vision – or improving on frame-based vision applications that may be done better, faster, or less expensively using an event-based approach.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

T2IR – Trigger to Image Reliability

Machine vision systems are widely accepted for their value-add in so many fields. Whether in manufacturing, pick-and-place, quality control, medicine, or other areas, vision systems save money, create competitive advantage, or both.

From “black box” uncertainties to confident outcomes using T2IR – Courtesy Teledyne DALSA

BUT deploying new systems can be tricky

For experienced machine vision system builders, sometimes the next new system is either “simple” in complexity, or a minor variant on previously built complex systems in which one has good confidence. Deployment, systems acceptance testing and validation may be straightforward, with little doubt about correctness or reliability. Congratulations if that’s the case.

For new undertakings, even for experienced vision system builders, there’s often some uncertainty in the “black box” nature of a new system. Can I trust the results as accurate? Am I double counting? Am I missing any frames? If the system overloads, how can I track which inspections were skipped while the system recovers? How to troubleshoot and debug? And rising QA standards are pushing us to verify performance – yikes!

T2IR: Trigger to Image Reliability can open up the black box

Take the mystery out of system deployment, tuning, and commissioning. T2IR’s powerful capabilities essentially open up the black box. And for users of many Teledyne DALSA cameras and framegrabber products, T2IR is free!

Two minute overview of T2IR: Trigger to Image Reliability – Courtesy Teledyne DALSA

Bundled with many Teledyne DALSA products

The T2IR features are available at no cost for customers who purchase (or already own) any of:

Cameras: Genie Nano, Linea, Piranha, and others

Frame Grabbers: Xtium and Xcelera

Software: Sapera LT SDK (for both Windows and Linux)

T2IR in a nutshell

T2IR, Trigger to Image Reliability, is a set of hardware and software features that help improve system reliability. It’s offered as a free benefit to Teledyne DALSA customers for many of their cameras and framegrabbers. It helps you get inside your system to audit and debug image flow.


Example One: Coping with unexpected triggers

Consider the following diagram:

Courtesy Teledyne DALSA

To the left of the illustration we see the expected normal trigger frequency. Each trigger, perhaps a mechanical event prompting an actuator to send an electronic pulse, initiates the next camera image, readout, and processing, at intervals known to be sufficient for complete processing. No lost or partial images. No overtriggering. All as expected.

But in the middle of the illustration we see invalid triggers interspersed with valid ones. What would happen in your system if you didn’t plan for this possibility? Lost or partial images? Risk of shipping bad product to customers due to missed inspections? Damage to your system? What’s the source of the extra triggers? How to debug the problem if it’s found during design and commissioning? How to recover if the system is already in production?

With T2IR, the Sapera API may be programmed to detect invalid triggers that fall outside the expected timing intervals. An event can be generated for the application to manage. The application might be programmed to do something like “stop the system”, “tag the suspect images”, or “note the timestamps”, or otherwise make a controlled recovery.
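
The underlying logic is simple interval checking. Below is a minimal Python sketch of the idea only – in a real deployment the detection happens inside the Sapera LT API (C++/.NET), which raises an event your application then handles:

```python
# Illustrative trigger-interval validation (not the Sapera API itself).
EXPECTED_PERIOD_US = 10_000   # nominal trigger spacing (assumed)
TOLERANCE_US = 500            # allowed jitter (assumed)

def check_triggers(timestamps_us):
    """Yield (timestamp, valid); triggers arriving too early are flagged."""
    last = None
    for ts in timestamps_us:
        valid = last is None or (ts - last) >= (EXPECTED_PERIOD_US - TOLERANCE_US)
        if valid:
            last = ts   # only valid triggers advance the timing reference
        yield ts, valid

for ts, ok in check_triggers([0, 10_050, 13_000, 20_100, 21_000, 30_000]):
    print(ts, "valid" if ok else "INVALID -> tag image / raise event")
```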

Key takeaway: instead of the system just blundering along with invalid triggers undetected, with possibly grave consequences, T2IR can help create a more robust system that recovers gracefully.

Courtesy Teledyne DALSA

Example Two: Tracking and Tracing Images – Tagging with timestamps

Consider the challenges of high-speed materials handling at, say, 3,000 parts per minute. Some such systems require multiple cameras, let’s say 4 per object, to make an accept, reject, or re-inspect decision on each component. For this hypothetical system, that would be 12,000 images per minute to coordinate!

Correlating up to thousands of images per minute – Courtesy Teledyne DALSA

The illustration above shows the timestamps one might assign with T2IR, based on ticks in microseconds, according to a known or assumed belt speed. But what if results seem corrupted using that method, or if there are doubts about whether the speed is truly constant?

With the Xtium frame grabber, one can use T2IR to choose among alternate timestamp bases besides the “hoped for” microsecond calculations:

Multiple timestamp tagging options – Courtesy Teledyne DALSA

So one might prefer to tag the images with an external trigger or a shaft encoder, which are physically coupled to the system’s movements, and hence more precise than calculated assumptions.
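
Whichever timestamp base is chosen, correlating images downstream amounts to grouping the four cameras’ timestamps per part. A hedged Python sketch of that bookkeeping, using assumed numbers from the hypothetical system above:

```python
# Group images from 4 cameras into per-part inspection sets by timestamp.
from collections import defaultdict

PART_PERIOD_US = 20_000   # 3,000 parts/min -> one part every 20 ms

def group_by_part(images):
    """images: iterable of (camera_id, timestamp_us) pairs."""
    parts = defaultdict(dict)
    for cam_id, ts in images:
        parts[round(ts / PART_PERIOD_US)][cam_id] = ts
    return parts

images = [(0, 40_010), (1, 40_030), (2, 39_980), (3, 40_055),
          (0, 60_002), (1, 60_041), (2, 59_990)]   # camera 3 dropped an image!
for part, cams in sorted(group_by_part(images).items()):
    status = "complete" if len(cams) == 4 else f"MISSING {4 - len(cams)} image(s)"
    print(f"part {part}: {status}")
```

The same grouping immediately exposes skipped or dropped inspections – exactly the “am I missing any frames?” question T2IR is designed to answer.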


Courtesy Teledyne DALSA

Sounds promising – how to learn more about T2IR?

Pursue any/all of the following:

Detailed primer: T2IR Whitepaper

Read further in this blog: Additional high-level illustrations and another example below

Speak with 1stVision: Sales engineers happy to speak with you – call 978-474-0044.


Diagnostic tool: a brief overview

Consider the following screenshot, in the context of an Xtium framegrabber together with T2IR:

Diagnostic T2IR window for Xtium framegrabber – Courtesy Teledyne DALSA

The point of the above annotated screenshot is to give a sense of the level of detail provided even at this top-level window – with drill-ins to inspect and control diverse parameters, to generate reports, etc. This is “just” a teaser blog meant to whet the appetite – read the tutorial, speak with us, or see the manual for a deeper dive.


T2IR Elements and Benefits – in more detail

Courtesy Teledyne DALSA

Conclusion

Every major machine vision camera and software provider offers an SDK and at least some configuration and control features. But Trigger to Image Reliability – T2IR – takes traceability and control to another level. And for users of so many Teledyne DALSA cameras, framegrabbers, and software, it’s bundled in at no cost!

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

245 MP SVS-Vistek SHR811 Super High Resolution Cameras

245 MP SHR811 – Courtesy SVS-Vistek

245 MP cameras in both monochrome and color models

SVS-Vistek, with 35 years’ experience developing machine vision cameras, has released its first SHR811 Super High Resolution cameras. Additional sensors and cameras will be released in the coming months. The first two SHR models, one monochrome and one color, are based on the Sony IMX811 CMOS sensor.

Highlights at a glance (Left); LCD pixels (Right) – Courtesy SVS-Vistek

The right-hand image above may look like just a color grid – in fact it’s a segment of a 245 MP image from an LCD inspection application. Imagine a large HDTV or other flat panel showing a test pattern as part of post-production quality acceptance testing. The segment shows just a subset grid of the activated pixels – at such resolution that machine vision algorithms can conclusively give each panel a clear pass or fail determination.
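
Conceptually, that pass/fail decision can be as simple as differencing the captured panel image against the expected test pattern and flagging pixels outside tolerance. A toy numpy sketch (our illustration, not SVS-Vistek’s algorithm):

```python
import numpy as np

# expected: the ideal test pattern; captured: what the 245 MP camera imaged.
expected = np.tile(np.array([[255, 0], [0, 255]], dtype=np.int16), (4, 4))
captured = expected.copy()
captured[3, 2] = 40   # simulate one stuck-dim pixel

defects = np.argwhere(np.abs(captured - expected) > 25)   # tolerance in gray levels
print("FAIL" if defects.size else "PASS", "- defective pixels at:", defects.tolist())
```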

Frame rates up to 12 fps might not sound impressive for certain applications, but for a 245 MP sensor it’s quite compelling. That’s achieved with the CoaXPress (CXP) interface.
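
Some hedged arithmetic puts that in perspective (bit depth and lane rate assumed for illustration – check the datasheet for actuals):

```python
# Data-rate estimate: 245 MP at 12 fps, 8 bits per pixel (assumed).
megapixels, fps, bits_per_px = 245, 12, 8
gbit_per_s = megapixels * 1e6 * fps * bits_per_px / 1e9
print(f"Raw video: ~{gbit_per_s:.1f} Gbit/s")   # ~23.5 Gbit/s

cxp12_lane_gbit = 12.5   # nominal bit rate of one CXP-12 lane
print(f"CXP-12 lanes needed: {gbit_per_s / cxp12_lane_gbit:.1f}")   # ~1.9 -> multiple lanes
```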

For the SHR811 CCX12 COLOR camera
For the SHR811 MCX12 MONOCHROME camera

Applications and Examples

The series name – SHR, for Super High Resolution – already suggests the applications for which these cameras are intended. You may have innovative applications of your own, but classical uses include:

  • Electronics and PCB inspection
  • Display inspection
  • Semiconductor wafer inspection
  • Microscopy
  • High-end surveillance
Applications and examples – Courtesy SVS-Vistek

The images below illustrate two of these applications, microscopy and surveillance:

SHR applications – Courtesy SVS-Vistek

Technical details

The Sony IMX811 is the remarkable sensor and key technology around which SVS-Vistek has designed the SHR811 launch-model cameras.

This sensor has 62% higher pixel density than the highly successful Sony IMX411, at 2x the frame rate and a similar sensor size. It’s a classic example of Moore’s Law – size reduction plus performance improvement – as Sony builds on its track record of innovation.
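
That density figure is easy to sanity-check, assuming the IMX411’s well-known 151 MP resolution and (per the text above) a roughly equal active area:

```python
# Pixel-density ratio: IMX811 (245 MP) vs IMX411 (151 MP), similar sensor area.
imx811_mp, imx411_mp = 245, 151
ratio = imx811_mp / imx411_mp
print(f"~{(ratio - 1) * 100:.0f}% more pixels in a similar area")   # ~62%
```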

1stVision overview of the SHR811MCX12 monochrome camera

Features

As one would expect, there is a comprehensive feature set, including:

SHR feature overview – Courtesy SVS-Vistek

To highlight one specific feature, consider the Sequencer capability. It allows a single trigger to begin a series of timed exposures, as described in the following short video:

Setting the Sequencer – Courtesy SVS-Vistek

For full specifications, go to the SVS-Vistek SHR camera family, drill in to a specific model, and see or download PDF Datasheet, Manual, Technical Drawing, and/or Sensor Spec. Or just call us at 978-474-0044 and let us guide you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.