4 generations of SONY Pregius sensors explained

Newer is better, right? Well, yes, if by better one means the very highest performance; more on that below. But the predecessor generations are performant in their own right, and remain cost-effective and appropriate for many applications. We often get the question “What’s the difference?” – in this piece we summarize the key differences among the 4 generations of SONY Pregius sensors.

In machine vision, sensors matter. Duh. As do lenses. And lighting. It’s all about creating contrast. And reducing noise. Each term linked above takes you to supporting pieces on those respective topics.

This piece is about the four generations of the SONY Pregius sensor. Why feature a particular sensor manufacturer’s products? Yes, there are other fine sensors on the market, and we write about those sometimes too. But SONY Pregius enjoys particularly wide adoption across a range of camera manufacturers. They’ve chosen to embed Pregius sensors in their cameras for a reason. Or a number of reasons really. Read on for details.

Machine vision cameras continue to reap the benefits of the latest CMOS image sensor technology since Sony announced the discontinuation of CCDs.  We have been testing and comparing various sensors over the years and frequently recommend Sony Pregius sensors when dynamic range and sensitivity are needed.

If you follow sensor evolution, even passively, you have probably also seen a ton of new image sensor names within the “Generations”.  But most users make a design-in sensor and camera choice, and then live happily with that choice for a few years – as we do when choosing a car, a TV, or a laptop. So unless you are constantly monitoring the sensor release pipeline, it’s hard to keep track of all of Sony’s part numbers. We will try to give you some insight into the progression of Sony’s Pregius image sensors used in industrial machine vision cameras.

How can I tell if it’s a Sony Pregius sensor?

Sony’s image sensor prefixes make it easy to identify the sensor family.  All Sony Pregius sensors have a part number beginning with “IMX.” Example: IMX174 – which today is one of the best sensors for dynamic range.

1stVision’s camera selector can be filtered by “Resolution” and you can scroll and see the sensors with a prefix of IMX.  CLICK HERE NOW

What are the differences in the “Generations” of Sony Pregius Image sensors?

Sony Pregius Generation 1:

Gen 1 primarily consisted of a 2.4MP resolution sensor with 5.86um pixels, BUT it had a well depth (saturation capacity) of 30Ke- – still unique in this regard among the generations.   Sony also brought the new generations to market with “slow” and “fast” versions of the sensors at two different price points.  In this case, the IMX174 and IMX249 were incorporated into industrial machine vision cameras providing two levels of performance.  For example, the Dalsa Nano M1940 (52 fps) uses the IMX174, vs. the Dalsa Nano M1920 (39 fps) using the IMX249 – but the IMX249 is 40% lower in price.
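Well depth matters because, together with read noise, it sets the sensor’s dynamic range. A quick sketch of that relationship – note the read-noise figures below are illustrative assumptions, not published Sony specs:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in decibels: 20 * log10(full-well capacity / read noise)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Gen 1 (IMX174-class): ~30 Ke- well depth, read noise of ~7 e- assumed
print(round(dynamic_range_db(30000, 7), 1))   # ~72.6 dB
# A hypothetical sensor with 10 Ke- well depth and 10 e- read noise
print(dynamic_range_db(10000, 10))            # 60.0 dB
```

This is why a deep well is valuable: even with modest read noise, it stretches the span between the dimmest and brightest detail a single exposure can hold.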

Sony Pregius Generation 2:

Sony’s main goal with Gen 2 was to expand the Pregius portfolio, which spans VGA to 12 MP image sensors.  The pixel size decreased to 3.45um, along with well depth to ~10Ke-, but noise also decreased!  The smaller pixels allowed smaller format lenses to be used, saving overall system cost.   However, this made it more taxing for the lens to resolve the 3.45um pixels.  In general, Gen 2 offered a great family of image sensors, and in turn an abundance of industrial machine vision cameras at lower cost than CCDs with better performance.
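Why smaller pixels tax the lens: by the Nyquist criterion, one line pair must span at least two pixels, so the required lens resolving power scales inversely with pixel pitch. A back-of-envelope calculation:

```python
def nyquist_lp_per_mm(pixel_um):
    """Minimum lens resolving power (line pairs per mm) to match a pixel pitch,
    per the Nyquist criterion: one line pair spans two pixels."""
    return 1000.0 / (2.0 * pixel_um)

print(round(nyquist_lp_per_mm(5.86), 1))  # Gen 1 pitch: ~85.3 lp/mm
print(round(nyquist_lp_per_mm(3.45), 1))  # Gen 2 pitch: ~144.9 lp/mm
```

Moving from 5.86um to 3.45um pixels thus demands a lens resolving roughly 145 lp/mm instead of 85 lp/mm – a meaningful jump in lens quality.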

1stVision’s camera selector can be filtered by “Resolution” AND by the pixel size that corresponds to one of the generations.  You will have a list of cameras from which you can select those starting with IMX.  E.g., all Generation 2 sensors will be 3.45um, and you can narrow to a desired resolution. CLICK HERE NOW

Sony Pregius Generation 3:

For Gen 3, Sony took the best of both Gen 1 and Gen 2.  The pixel size increased to 4.5um, increasing the well depth to 25Ke-!  This generation has fast data rates, excellent dynamic range, and low noise.  The family ranges from VGA to 7.1MP.  Gen 3 sensors started appearing in our machine vision camera lineup in 2018 and have continued to be designed into cameras over the last few years.

Sony Pregius Generation 4:

The 4th generation is denoted Pregius S, and is designed into a range of cameras from 5 through 25 Megapixels. Like the prior generations, Pregius S provides global shutter for active pixel CMOS sensors using Sony Semiconductor’s low-noise structure.

New with Pregius S is a back-illuminated structure – this enables smaller sensor size as well as faster frame rates. The benefits of faster frame rates are self-evident. But why is smaller sensor size so important? If two sensors, with the same pixel count, and equivalent sensitivity, are different in size, the smaller one may be able to use a smaller lens – reducing overall system cost.
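The sensor-size/lens-cost point is easy to quantify: active sensor area is just resolution times pixel pitch. A sketch with illustrative resolutions (not exact Sony part specs) comparing a Gen 2 pitch against the Pregius S pitch at the same pixel count:

```python
import math

def sensor_size_mm(width_px, height_px, pixel_um):
    """Active-area width, height, and diagonal (mm) from resolution and pixel pitch."""
    w = width_px * pixel_um / 1000.0
    h = height_px * pixel_um / 1000.0
    return w, h, math.hypot(w, h)

# Same ~5MP pixel count at two pitches (resolutions are illustrative):
_, _, d_gen2 = sensor_size_mm(2448, 2048, 3.45)  # ~11.0 mm diagonal -> 2/3" optics
_, _, d_gen4 = sensor_size_mm(2448, 2048, 2.74)  # ~8.7 mm diagonal -> smaller 1/1.8" optics
print(round(d_gen2, 1), round(d_gen4, 1))  # 11.0 8.7
```

The shorter diagonal is what lets the same pixel count fit a smaller, typically cheaper, lens format.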

Surface- vs back-illuminated image sensors – courtesy SONY Semiconductor Solutions Corporation

Pregius S benefits:

With each Pregius S photodiode closer to the micro-lens, a wider incident angle is created. This admits more light, which enhances sensitivity. At low incident angles, the Pregius S captures up to 4x as much light as Sony’s own highly-praised 2nd generation Pregius from just a few years ago!

With pixels only 2.74um square, one can achieve high resolution even in small cube-size cameras, continuing the evolution of more capacity and performance in less space.

Courtesy Sony Sensors

Fun fact: The “S” in Pregius S is for “stacked” – the layered architecture of the sensor, with the photodiode on top and circuits below, which as noted has performance benefits. It’s such an innovation – despite the already high-performing Gens 1, 2, and 3 – that Sony named Gen 4 “Pregius S” to really call out the benefits.


While Pregius S sensors are very compelling, the prior generation Pregius sensors remain an excellent choice for many applications. It comes down to performance requirements and cost, to achieve the optimal solution for any given application.

Pregius sensors by generation and sizes – Courtesy Sony Sensors

Many Pregius sensors, including Pregius S, can be found in industrial cameras offered by 1stVision. Use our camera selector to find Pregius sensors – any starting with “IMX”. For Pregius S in particular, supplement that prefix with a “5”, i.e. “IMX5”, to find Pregius S sensors like IMX540, IMX541, …, IMX548.
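That prefix rule is simple enough to apply programmatically, e.g. when filtering a parts list. The part numbers below are a small illustrative sample (ICX was Sony’s CCD-era prefix):

```python
# Hypothetical parts list; filter by prefix per the rule of thumb above.
sensors = ["IMX174", "IMX249", "IMX264", "IMX304", "IMX540", "IMX541", "IMX548", "ICX618"]

pregius = [s for s in sensors if s.startswith("IMX")]     # Pregius-family candidates
pregius_s = [s for s in sensors if s.startswith("IMX5")]  # Pregius S candidates
print(pregius_s)  # ['IMX540', 'IMX541', 'IMX548']
```

Note the rule is a heuristic for machine vision parts lists: all Pregius sensors carry the IMX prefix, but the IMX prefix alone also covers Sony sensors outside the Pregius line.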

Contact us

Sony Pregius image sensor Comparison Chart

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

3D IDS Ensenso C (Color) Series

Sometimes just the Z-values are enough, no image needed at all. Some applications require pseudo-images generated from a point cloud – whether in monochrome or with color tones mapped to Z values. Yet other applications require – or benefit from – 3D digital point cloud data as well as color rendering. IDS Ensenso’s C Series provides stereo 3D imaging with precise metrics as well as true color rendering.

Two models of Ensenso stereo cameras – Images courtesy of IDS

If you want an overview of 3D machine vision techniques, download our Tech Brief. It surveys laser triangulation, structured light, Time of Flight (ToF), and stereo vision. If you know you want stereo vision, you might like an overview of all IDS Ensenso 3D offerings.

But if you know you want stereo 3D accuracy to 0.1mm, with color rendering, let’s dive into the IDS Ensenso C Series. If you prefer to speak with us instead of reading further, just call us at 978-474-0044, or request that we follow up via our contact form.

Key differentiator is “projected texture”

In the short video below, we see 3 scene pairs. For each pair, the left image is the unenhanced 3D image. The right image takes advantage of the projected texture created by the LED projector and the RGB sensor, augmenting the 3D point cloud with color information. It can be a differentiator for certain applications.

Video courtesy of IDS

Application areas

Let’s start with candidate application areas, from the customer’s perspective, before pointing out specific features. In particular, let’s look at application areas including:

  • Detect and recognize
  • Bin picking
  • De-palletizing
  • Test and measure

Detect and recognize

The ability to accurately detect moving objects to select, sort, verify, steer, or count can enhance (or create new) applications. Ensenso C’s high-luminance projector enables high pattern contrast for single-shot images. Video courtesy of IDS.

Bin picking

Regardless of a robot’s gripping sensitivity, speed, and range of motion, 3D imaging accuracy is central to success. Ensenso C’s integrated RGB sensor can make all the difference for color-dependent applications. Video courtesy of IDS.


De-palletizing

De-palletizing might seem like a straightforward operation, but it must detect object size, rotation, and position even with different and densely stacked goods. Ensenso C supports all those requirements – even from a distance. Video courtesy of IDS.

Test and measure

Automated inspection and measurement of large-volume objects are key for many quality control applications. Precision to the millimeter range can be achieved with Ensenso C at working distances even to 5m. Video courtesy of IDS.

IDS Ensenso C Series

With two models to choose from, Ensenso C supports a range of working distances and focal distances – see specifications.

Both models utilize GigE Vision interface; both embed a 200W LED projector; both use C-mount lenses; both provide IP 65/67 protection. And both models are easy to configure with the Ensenso SDK: Windows or Linux; sample programs including source code; live composition of 3D point clouds from multiple viewing angles; robot eye-hand calibration; and more.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

Test your parts in 3D lab

Have you wondered if 3D laser profiling would work for your application? Unless you have experience in 3D imaging, for which laser profiling is one of several popular methods, you may be uncertain of the fit for your application. Yes, one can read a comprehensive Tech Brief on 3D methods, or product specifications, but wouldn’t it be helpful to see some images of your parts taken with an actual 3D Laser Profiler?

Image courtesy Teledyne DALSA.

While prototyping at your facility is of course one option, if your target objects can be shipped, Teledyne DALSA has a Z-Trak Application Lab, whose services we may be able to arrange at no cost to you. Just describe your application requirements to us, and if 3D laser profiling sounds promising, the service works as follows:

  1. Send in representative samples (e.g. good part, bad part)
  2. We’ll configure Z-Trak Application Lab relative to sample size, shape, and applications goals, and run the samples to obtain images and data
  3. We’ll send you data, images, and reports
  4. Together we’ll interpret the results and you can decide if laser profiling is something you want to pursue

Really, just send samples in? Anything goes? Well not anything. It can’t be 50 meters long. Maybe a 15 centimeter subset would be good enough for proof of concept? And if the sample is a foodstuff, it can’t suffer overnight spoilage before it arrives.

A phone conversation that discusses the objects to be inspected, their dimensions, and the applications goal(s) is all we need to qualify accepting your samples for a test. Image courtesy of Teledyne DALSA.

Case study

In this segment, we feature outtakes from a recent use of the Z-Trak Application Lab, for a customer who needs to do weld seam inspections. The objective is to image a metal part with two weld seams using a Z-Trak 3D Laser Profiler and produce 3D images for evaluation of application feasibility. The images and texts shown here are taken from an actual report prepared for a prospective customer, to give you an understanding of the service.


  • Z-Trak LP1-1040-B2
  • Movable X,Y stage
  • X-Resolution: ~25 um
  • Y-Resolution: 40 um
  • WD: ~50 mm

Image courtesy Teledyne DALSA

The metal part was laid flat on the X,Y stage under the Z-Trak. The stage was moved to scan the part.

To the right, see the image generated from a perpendicular scan of the metal part. Image courtesy Teledyne DALSA.

The composite image below requires some explanation. The graphs in the middle column, from top to bottom, show Left-Weld-Length, Right-Weld-Length, and Weld-Midpoint-Width (between the left and right welds), respectively. The green markup arrows help you correlate the measurements to the image on the left. The rightmost column includes summary measurements such as Min, Max, and Mean values.
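Those Min, Max, and Mean values are simply summary statistics computed over the measured profile. A minimal sketch, using made-up Z values in mm rather than actual report data:

```python
# Illustrative only: summary stats over a hypothetical weld-height profile (mm).
profile_mm = [1.02, 1.05, 0.98, 1.10, 1.04, 0.97, 1.06]

z_min = min(profile_mm)
z_max = max(profile_mm)
z_mean = sum(profile_mm) / len(profile_mm)
print(z_min, z_max, round(z_mean, 3))  # 0.97 1.1 1.031
```

In practice the profiler software computes these over thousands of points per scan line, but the statistics themselves are this simple.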

Image courtesy Teledyne DALSA

Now have a look at a similar screenshot, for Sample #2, which includes a “bad weld”:

Image courtesy Teledyne DALSA

With reference to the image above, the customer report included the following passage:

The top-right image is the left weld seam profile. In the Reporter window the measurement of this seam is 1694.79 mm long. However, a defect can be noted at the bottom of the left weld. In addition to the defect it can be seen from the profile that the weld is not straight in the Z-direction. The weld is closer to the surface at the top and further from the surface at the bottom.

Translation: The automated inspection reveals the defective weld! Naturally one would have to dig in further regarding definitions of “good weld”, “bad weld”, tolerances, where to set thresholds to balance yields and quality standards vs. too many false positives, etc.


The report provided to the customer concluded that “This application is feasible using a Z-Trak 3D Laser Profiler.” While it’s likely that outcome will be achieved if we qualify your samples and application to use the Z-Trak Application Lab service, it’s not a foregone conclusion. We at 1stVision and our partner Teledyne DALSA are in the business of helping customers succeed, so we’re not going to raise false hopes of application success.


To summarize, the segments above are representative outtakes from an actual report prepared by the Z-Trak Application Lab. The full report contains more images, data, and analysis. Our goal here is to give you a taste for the complimentary service, to help you consider whether it might be helpful for your own application planning process.

Next steps?

To learn more, see a recent blog “Which Z-Trak 3D camera is best for my application?“. Or have a look at the Z-Trak product overview.

If you’d like to send in your parts, please use this “Contact Us” link or the one below. In the ‘Tell us about your project’ field, just write something like “I’d like to have parts sent to the Z-trak lab.” If you want to write additional details, that’s cool – but not required. We’ll call to discuss details at your convenience.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

Ensenso – 1stVision expands 3D portfolio with stereo vision

IDS Ensenso 3D cameras
Ensenso 3D Cameras – Courtesy of IDS

Most industries go through waves of technology and product innovation as they mature. In powered flight we had propellers long before jets, though each still has its place. In machine vision, 1D and 2D imaging took several decades to mature before 3D moved from experimentation and early innovation to mature products affordable to many. Download our Tech Brief “Which 3D imaging technique is best for my application?“, if you haven’t yet committed to a particular approach.

Stereo vision is one of the fastest growing approaches to 3D imaging, thanks to Moore’s Law – ever more powerful and compact cameras and processing – together with modularized and turnkey products. 1st Vision is pleased to represent IDS Imaging’s Ensenso series of 3D cameras. In addition to the downloadable Tech Brief linked above, we encourage you to read on for an overview of all four Ensenso 3D camera families, the S, N, C, and X Series, respectively. If you prefer we guide you directly to a best-fit for your application, just give us a call at 978-474-0044.

Before we get to several different stereo vision series, and their respective capabilities, we note that IDS’ Ensenso S Series in fact utilizes the structured light approach rather than stereo vision. Per the Tech Brief linked above, there are several ways to do 3D.

S Series

Ensenso S Series are compact 3D industrial cameras combining AI software with 3D infrared laser point triangulation, generating point clouds to Z dimension accuracy of 2.4 mm at 1 meter distance. They are a cost-effective solution for many budget-conscious and high volume 3D applications. Each is in a zinc housing with IP65/67 protection.

3D imaging via structured light – Courtesy of IDS

Back to stereo vision: the IDS Ensenso N, C, X and XR 3D Series are based on the stereo vision principle.

The Stereo Vision principle – Courtesy of IDS
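The stereo vision principle boils down to triangulation: a feature’s depth Z follows from the focal length f (in pixels), the baseline B between the two cameras, and the disparity d – the feature’s horizontal shift between the left and right images. A sketch with illustrative numbers, not Ensenso specs:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_mm: camera separation;
    disparity_px: horizontal shift of a matched feature between the two views."""
    return focal_px * baseline_mm / disparity_px

# Assumed values: 1400 px focal length, 100 mm baseline, 140 px disparity
print(depth_from_disparity(1400, 100.0, 140.0))  # 1000.0 mm -> object at ~1 m
```

Note the reciprocal relationship: depth accuracy degrades as disparity shrinks with distance, which is why stereo cameras specify accuracy at a given working distance.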

N Series

Ensenso N Series 3D cameras are designed for harsh industrial environments and pre-calibrated for easy setup.  N Series 3D cameras are “TM Plug & Play” certified by Techman Robot, and suitable for many 3D applications such as robotics and factory automation.

The Ensenso N Series 3D camera works for either static or moving objects even in changing or low light conditions.  With IP65/67 protection, and a compact design, the Ensenso N Series 3D cameras fit into tight spaces or in moving components such as robotic arms. There are two variants:

  • N3X: aluminum housing for optimal heat dissipation in extreme environments
  • N4X: cost-effective plastic composite housing

C Series

The Ensenso C Series 3D camera also uses stereo vision, but additionally embeds a color CMOS RGB sensor, pre-calibrated and aligned with the stereo vision system. This allows a “colorized” effect, as shown in the video clip below, where one sees 3 adjacent image pairs. Each “right image” is the colorized augmentation on top of the initial stereo point cloud view to its left. Most would agree it lends a more realistic look.

Color sensor lends more realistic look to point cloud – Courtesy IDS

The C Series delivers Z accuracy of 0.1 mm at 1 meter distance with the C-57S, or 0.2 mm at 2 meters with the C-57M.

Ensenso C Series – small or medium option – Courtesy of IDS

X Series

The Ensenso X Series 3D camera is an ultra-flexible, modular, 3D GigE industrial camera system. The X Series 3D camera systems are available in two variants: X30 and X36.

Ensenso X Series – Courtesy IDS

The Ensenso X30 3D camera system is designed to capture moving objects, making it suitable for many industrial applications such as factory automation production lines and bin picking.

For static objects, use the Ensenso X36 3D camera system. FlexView2 greatly increases the resolution, producing 3D images with precise detail and definition of the objects being captured, even with low light or reflective surfaces.

The Ensenso X 3D camera system includes a 100 watt LED projector with an integrated GigE power switch. The 3D camera system can be configured with a choice of GigE uEye cameras using 1.6 or 5 megapixel CMOS monochrome sensors to create your customized 3D imaging system.

Working distances may be up to 5m, and point cloud models may be developed for objects up to 8 cubic meters in volume!

All of the above cameras include the Ensenso SDK software that accelerates the application set up, configuration and development time. Ensenso 3D cameras are ideal for numerous industrial 3D applications including robotics, logistics, factory automation, sorting, and quality assurance.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!