Sony Announces a New Type 1/2.6 22.5 Megapixel Exmor RS
The FINANCIAL — Sony Corporation is announcing the commercialization of a new Exmor RS image sensor for smartphones and other devices that require increasingly better cameras and thinner form factors.
The IMX318 is a type 1/2.6 stacked CMOS image sensor with 22.5 effective megapixels, and it boasts a more compact size, greater image quality, and higher performance. This new Exmor RS is the industry’s first to be equipped with built-in high-speed hybrid autofocus, as fast as 0.03 seconds, and built-in 3-axis electronic image stabilization technology for video. Sony aims to begin shipments of this new CMOS image sensor in May 2016.
The IMX318 boasts a stacked structure and the industry’s smallest unit pixels, which measure 1.0μm (micrometers) in size. With this, the image sensor achieves a compact, type 1/2.6 size suitable for inclusion in smartphones, while still realizing a high resolution of 22.5 effective megapixels, among the top in the industry. This new image sensor not only boasts enhanced resolution, but it also matches its predecessor in image quality despite the IMX318’s smaller size and unit pixels (IMX230 predecessor model: type 1/2.4 sensor with 1.12μm unit pixel size). Additionally, Sony has succeeded in equipping the IMX318 with cutting-edge digital imaging features, namely high-speed built-in hybrid AF and 3-axis electronic image stabilization for video, a first for the industry. With this image sensor, the user can reliably capture high-quality stills of decisive moments, as well as high-resolution video without fear of blur, according to Sony.
The industry’s smallest unit pixel size at 1.0μm, achieving a compact size (type 1/2.6) while realizing a high resolution of 22.5 effective megapixels
As smartphones grow ever thinner, so too are image sensors growing increasingly more compact. In line with this trend, Sony has developed a minuscule 1.0μm pixel sensor that, despite its small size, realizes high image quality. To accomplish this, Sony employed manufacturing technology that improves light utilization efficiency, as well as circuit design technology that eliminates noise, a root cause of deterioration in image quality. With this innovation, Sony has realized an image sensor that delivers high quality images, while also boasting small optical size and a high resolution of 22.5 megapixels. Night shots have long been a weak point for the compact cameras used in smartphones, with the lack of light translating into excessive visual noise. The IMX318 addresses this weakness and realizes beautiful photography in nighttime conditions.
The industry’s first image sensor with built-in hybrid AF, as fast as 0.03 seconds, and 3-axis electronic image stabilization
Hybrid AF, which merges image plane phase detection AF with contrast detection AF, has previously been realized through the combination of an image sensor and an application processor. But with the IMX318, Sony has created the industry’s first stacked CMOS image sensor with hybrid AF built into the sensor’s internal signal processor. The IMX318 leverages Sony’s high-speed AF technology that has been honed over many years, boasting an AF as fast as 0.03 seconds (and as fast as 0.017 seconds when shooting video at 60fps). With this power in hand, the user can capture those decisive moments reliably in focus, whether shooting stills or video.
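To make the idea concrete, here is a minimal Python sketch of how a hybrid AF loop can work in principle: a single phase-detection reading gives both the direction and an estimate of the distance to focus, and a short contrast-detection hill climb then refines the result. The callables, step sizes, and units are hypothetical and do not represent Sony’s on-sensor implementation.

```python
# Minimal sketch of a hybrid autofocus loop. `read_phase_offset`,
# `measure_contrast`, and `move_lens` are hypothetical callables standing
# in for the sensor and lens driver; units and step sizes are arbitrary.

def hybrid_autofocus(read_phase_offset, measure_contrast, move_lens,
                     position=0.0, fine_step=0.02):
    # Phase detection: one reading estimates both direction and distance,
    # so the lens can jump close to focus in a single move.
    position += read_phase_offset(position)
    move_lens(position)

    # Contrast detection: hill-climb in small steps to refine the peak.
    best_pos, best_score = position, measure_contrast(position)
    for direction in (+1, -1):
        pos = position
        while True:
            pos += direction * fine_step
            score = measure_contrast(pos)
            if score <= best_score:
                break
            best_pos, best_score = pos, score
    move_lens(best_pos)
    return best_pos
```

The speed advantage comes from the first step: phase detection replaces most of the back-and-forth searching that a pure contrast method would need.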
3-axis electronic image stabilization for video
With the IMX318, Sony has also created the industry’s first stacked CMOS image sensor with image stabilization functionality built into its internal signal processor. This sensor leverages the image stabilization technology that Sony has cultivated over its years developing cameras, realizing smooth 4K videos with little camera shake by making effectively use of the signal output obtained from the external 3-axis (pitch, yaw, and roll) gyro sensor. Sony’s unique image stabilization technology incorporated into the IMX318 corrects not only camera shake, but also lens distortion, making for more beautiful videos. Furthermore, because image stabilization is achieved by processing within the hardware of the image sensor, less power is used than when it is accomplished by software processing in the external application processor. Since it enables smooth video shooting, this image sensor is suited for incorporation into not only smartphones, but also a variety of other products that tend to generate substantial camera shake, such as aerial drones used for image capture.
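For readers curious what “making use of 3-axis gyro output” looks like in practice, here is a small Python sketch of the general approach behind gyro-based electronic stabilization: integrate the pitch, yaw, and roll rates over a frame interval, build the inverse rotation, and map it through the camera intrinsics into a 2D warp that counteracts the shake. The intrinsic matrix and gyro values below are made up for illustration, and the sketch ignores rolling-shutter and lens-distortion correction, which the article says Sony also handles on-chip.

```python
import numpy as np

# Hypothetical per-frame stabilization step: integrate gyro samples
# (pitch, yaw, roll rates in rad/s) over the frame interval, build the
# inverse rotation, and map it to a 2D homography through the camera
# intrinsics. K and the gyro data are illustrative placeholders.

def rotation_from_gyro(gyro_samples, dt):
    """Integrate small-angle gyro readings into pitch/yaw/roll angles."""
    pitch, yaw, roll = (sum(axis) * dt for axis in zip(*gyro_samples))
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def stabilizing_homography(gyro_samples, dt, K):
    """Homography that counter-rotates the frame: H = K * R^T * K^-1."""
    R = rotation_from_gyro(gyro_samples, dt)
    return K @ R.T @ np.linalg.inv(K)

# Example with a made-up intrinsic matrix and fake gyro readings at 1 kHz.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
samples = [(0.01, -0.02, 0.005)] * 10
H = stabilizing_homography(samples, dt=0.001, K=K)
```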
4K and high frame rate video recording, through high-speed communications leveraging the latest MIPI specifications
Sony elected to adopt the MIPI (Mobile Industry Processor Interface) Alliance’s latest C-PHY 1.0/D-PHY 1.2 specifications for this image sensor’s interface. With this interface, the IMX318 is able to achieve more power efficient and faster data transmission from the image sensor to the application processor. As a result, even at the high resolution of 22.5 megapixels, transmission is realized for all pixels at 30fps. This enables seamless switching between high resolution video and still image photography, realizing the best of both worlds. Since the image sensor can transmit greater-than-4K resolution images for all pixels to the application processor, the user can capture 22.5 megapixel still images even while in the middle of recording video in 4K at 30fps.
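For a rough sense of the bandwidth involved, the snippet below estimates the raw data rate of a full-pixel readout at 30fps. The 10-bit RAW depth is an assumption for illustration (the press release does not state the bit depth), and interface overhead is ignored.

```python
# Back-of-the-envelope check of the raw data rate implied by reading
# all 22.5 million pixels 30 times per second at an assumed 10 bits
# per pixel.

pixels = 22.5e6          # effective pixels
fps = 30                 # full-pixel readout rate
bits_per_pixel = 10      # assumed RAW bit depth (illustrative)

raw_gbps = pixels * fps * bits_per_pixel / 1e9
print(f"~{raw_gbps:.1f} Gbit/s before interface overhead")
# ~6.8 Gbit/s, which is why a fast sensor-to-processor link such as
# MIPI C-PHY/D-PHY matters here.
```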
Sony IMX378: Comprehensive Breakdown of the Google Pixel’s Sensor and its Features
We reached out to Sony to try to learn a bit more about the IMX378 sensor that is used by the upcoming Google Pixel and Pixel XL phones. Learn all about it!
IMX378 Overview
We reached out to Sony to try to learn a bit more about the IMX378 sensor that is used by the upcoming Google Pixel and Pixel XL phones, as well as by the Xiaomi Mi 5S. Unfortunately, Sony was not able to distribute the datasheet for the Exmor RS IMX378 sensor just yet, but they were extremely helpful, and were able to provide us with some previously unreleased information about the IMX378.
First up, the rumored name itself was wrong. Despite rumors stating that it would be part of the Exmor R line of Backside Illuminated (BSI) CMOS sensors, like the IMX377 before it that was used in the Nexus 5X and Nexus 6P, our contact at Sony has informed us that the IMX378 will instead be considered part of Sony’s Exmor RS line of Stacked BSI CMOS sensors.
While many things have remained the same from the IMX377 to the IMX378, including the pixel size (1.55 μm) and sensor size (7.81 mm diagonal), a couple of key features have been added. Namely, it is now a stacked BSI CMOS design, it has PDAF, it adds Sony’s SME-HDR technology, and it has better support for high frame rate (slow motion) video.
Stacked BSI CMOS
Backside illumination by itself is an extremely useful feature that has become almost standard in flagship smartphones for the last few years, starting with the HTC Evo 4G in 2010. It allows the camera to capture substantially more light (at the cost of more noise) by moving some of the structure that traditionally sat in front of the photodiode on front illuminated sensors, behind it.
Surprisingly, unlike most camera technology, backside illumination originally started appearing in phones before DSLRs, thanks in large part to the difficulties of creating larger BSI sensors. The first BSI APS-C sensor was the Samsung S5KVB2 found in the company’s NX1 camera from 2014, and the first full-frame sensor was the Sony Exmor R IMX251 found in the Sony α7R II from last year.
Stacked BSI CMOS technology takes this one step further by moving more of the circuitry from the front layer onto the supporting substrate behind the photodiodes. This not only allows Sony to substantially reduce the size of the image sensor (allowing for larger sensors in the same footprint), but also allows Sony to print the pixels and circuits separately (even on different manufacturing processes), reducing the risk of defects, improving yields, and allowing for more specialization between the photodiodes and the supporting circuitry.
PDAF
The IMX378 adds Phase Detection Autofocus, which last year’s Nexus phones and the IMX377 did not support. It allows the camera to effectively use the differences in light intensity between different points on the sensor to identify whether the object that the camera is trying to focus on is in front of or behind the focus point, and adjust the sensor accordingly. This is a huge improvement both in terms of speed and accuracy over the traditional contrast-based autofocus that we’ve seen on many cameras in the past. As a result, we’ve seen an absolute explosion of phones using PDAF, and it has become a huge marketing buzzword which is held up as a centerpiece of camera marketing across the industry.
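As a loose illustration of the principle (not Sony’s on-sensor implementation), the Python sketch below correlates two one-dimensional signals standing in for the left- and right-masked phase-detection pixels: the sign of the best-matching shift says which way the lens must move, and its magnitude scales with how far out of focus the subject is.

```python
import numpy as np

# Toy phase-detection example: find the shift that best aligns the "left"
# and "right" partially-masked pixel signals. Entirely illustrative.

def phase_disparity(left, right, max_shift=20):
    """Displacement (in pixels) of `right` relative to `left`."""
    left = left - left.mean()
    right = right - right.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(left[max_shift:-max_shift],
                       np.roll(right, -s)[max_shift:-max_shift])
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(0)
left = rng.normal(size=256)
right = np.roll(left, 6)             # simulate a 6-pixel defocus disparity
print(phase_disparity(left, right))  # -> 6
```

A contrast-only system has to search by trial and error, whereas this single measurement already tells the lens both where to go and roughly how far.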
While not quite as quick to focus as the Dual Photodiode PDAF that the Samsung Galaxy S7 has (also known as “Dual Pixel PDAF” and “Duo Pixel Autofocus”), which allows every single pixel to be used for phase detection by including two photodiodes per pixel, the merger of PDAF and laser autofocus should still be a potent combination.

High Frame Rate
There’s been a lot of talk lately about high frame rate cameras (both for consumer applications, and in professional filmmaking). Being able to shoot at higher frame rates can be used both to create incredibly smooth videos at regular speed (which can be fantastic for sports and other high-speed scenarios) and to create some really interesting videos when you slow everything down.
Unfortunately, it is extremely difficult to shoot video at higher frame rates, and even when your camera sensor can shoot at higher frame rates, it can be difficult for the phone’s image signal processor to keep up. That is why while the IMX377 used in the Nexus 5X and 6P could shoot 720p video at 300 Hz and 1080p video at 120 Hz, we only saw 120 Hz 720p from the Nexus 5X and 240 Hz 720p from the 6P. The IMX377 was also capable of 60 Hz 4k video, despite the Nexus devices being limited to 30 Hz.
The Pixel phones are both able to bring this up to 120 Hz 1080p video and 240 Hz 720p video thanks in part to improvements related to the IMX378, which sees an increase in capabilities of up to 240 Hz at 1080p.

The sensor is also able to shoot full resolution burst shots faster, stepping up to 60 Hz at 10 bit output and 40 Hz at 12 bit output (up from 40 Hz and 35 Hz respectively), which should help reduce the amount of motion blur and camera shake when using HDR.
SME-HDR
Traditionally, HDR for video has been a trade-off. You either had to cut the frame rate in half, or you had to cut the resolution in half. As a result, many OEMs haven’t even bothered with it, with Samsung and Sony being among the few that do implement it. Even the Samsung Galaxy Note 7 is limited to 1080p 30 Hz recording due in part to the heavy computational cost of HDR video.
The first of the two main traditional methods for HDR video, which Red Digital Cinema Camera Company calls HDRx and which Sony calls Digital Overlap HDR (DOL-HDR), works by taking two consecutive images, one exposed darker and one exposed lighter, and merging them together to create a single video frame. While this allows you to keep the full resolution of the camera (and set different shutter speeds for the two separate frames), it can often result in issues due to the time gap between the two frames (especially with fast moving objects). Additionally, it can be very difficult for the processor to keep up, as with DOL-HDR, the phone’s ISP handles merging the separate frames together.
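A minimal Python sketch of the two-frame idea follows, assuming a made-up 4x exposure ratio and a simple saturation-based blend rather than either vendor’s actual weighting.

```python
import numpy as np

# Minimal sketch of merging a short and a long exposure into one HDR
# frame, in the spirit of two-frame HDR video. The blending weights and
# the 4x exposure ratio are illustrative choices only.

def merge_two_exposures(short_exp, long_exp, exposure_ratio=4.0,
                        saturation=0.95):
    """Use the long exposure where it is not clipped, the (scaled)
    short exposure where the long frame has blown out."""
    long_norm = long_exp.astype(np.float64)
    short_norm = short_exp.astype(np.float64) * exposure_ratio
    # Weight falls to zero as the long exposure approaches saturation.
    w = np.clip((saturation - long_norm) / saturation, 0.0, 1.0)
    return w * long_norm + (1.0 - w) * short_norm

# Tiny synthetic example: a bright region that clips in the long exposure.
scene = np.array([0.1, 0.5, 2.0, 3.5])       # "true" radiance
long_exp = np.clip(scene, 0, 1.0)            # long frame clips highlights
short_exp = np.clip(scene / 4.0, 0, 1.0)     # short frame keeps them
print(merge_two_exposures(short_exp, long_exp))
```

The motion-artifact problem mentioned above comes from the fact that the two inputs are captured at different moments; nothing in the merge itself can fully undo that.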
The other traditional method, which Sony calls Binning Multiplexed Exposure HDR (BME-HDR), sets a different exposure setting for every pair of two lines of pixels in the sensor to create two half resolution images at the same time, which are then merged together into one HDR frame for the video. While this method avoids the issues associated with HDRx, namely a reduction in frame rate, it has other issues, specifically the reduction in resolution and the limits on how the exposure can be changed between the two sets of lines.
Spatially Multiplexed Exposure (SME-HDR) is a new method that Sony is using to allow them to shoot HDR at the full resolution and at the full frame rate that the sensor is capable of. It is a variant of Spatially Varying Exposure that uses proprietary algorithms to allow Sony to capture the information from the dark and light pixels, which are arranged in a checkerboard style pattern, and infer the full resolution image for both the dark and light exposure images.
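While Sony’s actual reconstruction is proprietary, the toy Python sketch below shows the general mechanics of a spatially multiplexed exposure: the RAW frame is split along a checkerboard into a long-exposure set and a short-exposure set, and each set is interpolated back to full resolution (here with naive 4-neighbour averaging) before the two estimates would be merged as in the two-frame case above.

```python
import numpy as np

# Rough sketch of spatially multiplexed exposure: half the pixels get a
# long exposure and half a short one in a checkerboard, and each set is
# interpolated back to full resolution before merging. The simple
# 4-neighbour averaging below stands in for Sony's proprietary
# reconstruction, which is not public.

def split_checkerboard(raw):
    """Return (long_mask, short_mask) for a checkerboard exposure layout."""
    yy, xx = np.indices(raw.shape)
    long_mask = (yy + xx) % 2 == 0
    return long_mask, ~long_mask

def fill_missing(raw, mask):
    """Fill pixels where mask is False with the mean of valid 4-neighbours."""
    out = np.where(mask, raw, 0.0).astype(np.float64)
    counts = mask.astype(np.float64)
    acc = np.zeros_like(out)
    n = np.zeros_like(out)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        acc += np.roll(out, (dy, dx), axis=(0, 1))
        n += np.roll(counts, (dy, dx), axis=(0, 1))
    filled = acc / np.maximum(n, 1)
    return np.where(mask, raw, filled)

raw = np.random.default_rng(1).uniform(size=(6, 6))   # fake RAW frame
long_mask, short_mask = split_checkerboard(raw)
long_full = fill_missing(raw, long_mask)    # full-res long-exposure estimate
short_full = fill_missing(raw, short_mask)  # full-res short-exposure estimate
```

Because both exposures come from the same readout, there is no temporal gap between them, which is exactly what the two-frame method struggles with.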
Unfortunately, Sony was not able to give us more detailed explanations about the exact pattern, and they may never be able to disclose it; companies tend to play their cards very close to their chest when it comes to cutting-edge technology like that which we see in HDR, with even Google having its own proprietary algorithm for HDR photos, known as HDR+. There is still some publicly available information that we can use to piece together how it may be accomplished, though. A couple of papers have been published by Shree K. Nayar of Columbia University (one of which was in collaboration with Tomoo Mitsunaga of Sony) that contain different ways to use Spatially Varying Exposure, and different layouts that can achieve it. Below is an example of a layout with four levels of exposure on an RGBG image sensor. This layout claims to be able to achieve single-capture full resolution HDR images with only around a 20% loss in spatial resolution, depending on the scenario (the same accomplishment that Sony claims for SME-HDR).
Sony has used SME-HDR in a couple of image sensors already, including in the IMX214 that has seen a lot of popularity lately (being used in the Asus Zenfone 3 Laser, the Moto Z, and the Xperia X Performance), but it is a new addition to the IMX378 compared to the IMX377 that was used last year. It allows the camera sensor to output both 10-bit full resolution images and 4K video at 60 Hz with SME-HDR active. While a bottleneck elsewhere in the process will result in a lower limit, this is a fantastic improvement over what the IMX377 was capable of, and is a sign of good things to come in the future.
One of the big improvements of the IMX378 over the IMX377 is that it is able to handle more of the image processing on-chip, reducing the workload of the ISP (although the ISP is still able to request the RAW image data, depending on how the OEM decides to use the sensor). It can handle many small things like defect correction and mirroring locally, but more importantly, it can also handle BME-HDR or SME-HDR without having to involve the ISP. That could potentially be a major difference going forwards by freeing up some overhead for the ISP on future phones.
We would like to thank Sony once again for all the help with creating this article. We really appreciate the lengths that Sony went to in helping ensure the accuracy and depth of this feature, especially in allowing us to uncover some previously-unreleased information about the IMX378.
That being said, it really is a shame that it is so hard to access some of this information, even basic product information. When companies try to put information on their websites, it often can be rather inaccessible and incomplete, in large part because it is often treated as a secondary concern of the company’s employees, who are more focused on their main work. One dedicated person handling public relations can make a huge difference in making this type of information available and accessible to the general public, and we’re seeing some people trying to do just that in their free time. Take the Sony Exmor Wikipedia article itself: over the course of a couple of months, a single person working in their spare time laid most of the foundation to take it from a nearly useless 1,715-byte article that had been mostly the same for years into the roughly 50,000-byte article, with 185 distinct editors, that we see there today. That article is arguably the best repository of information about the Sony Exmor sensor line available online, and we can see a very similar pattern on other articles. A single dedicated writer can make a substantial difference in how easily customers can compare different products, and in how educated interested consumers are about the subject, which can have far-reaching effects. But that’s a topic for another time.
As always, we’re left wondering how these hardware changes will affect the devices themselves. We quite clearly will not be getting 4K 60 Hz HDR video (and may not be getting HDR video at all, as Google has not mentioned it yet), but the faster full resolution shooting likely will help substantially with HDR, and we will see the improvements of the newer sensor trickle into the phone in other small but meaningful ways as well.
While DXOMark lists the Pixel phones as performing slightly better than the Samsung Galaxy S7 and HTC 10, many of the things that gave the Pixel phones that small lead were major software improvements like HDR+ (which produces absolutely fantastic results, and to which DXOMark dedicated an entire section of their review) and Google’s special EIS system (which can work in tandem with OIS) that samples the gyroscope 200 times a second to provide some of the best Electronic Image Stabilization we have ever seen. Yes, the Pixel phones have a great camera, but could they have been even better with OIS and Dual Pixel PDAF added in? Absolutely.
Don’t get me wrong, as I said, the Pixel phones have an absolutely stunning camera, but you can’t really blame me for wanting more, especially when the path to those improvements is so clear (and when the phones are priced at full flagship pricing, where you expect the best of the best). There’s always going to be a part of me that wants more, that wants better battery life, faster processors, better battery life, brighter and more vivid screens, louder speakers, better cameras, more storage, better battery life, and most importantly, better battery life (again). That being said, the Pixel phones have many small fantastic features that could come together to create a truly promising device, which I am excited to see.
Sony IMX989, a 1-inch type image sensor for smartphones
Xiaomi’s new 12S Ultra smartphone, which the company will reveal on July 4, features the Sony IMX989, a 1-inch type image sensor that may well change the future of smartphone imaging.
We’ve seen imaging companies, in recent months, promise that the future of photography is just around the corner. Xiaomi and Leica Camera announced last May that, thanks to their cooperation, a new era of mobile imaging is coming, and we may see some of the results in the upcoming Xiaomi smartphones. In June, Leica and Panasonic announced a partnership that will bring a new imaging world… probably to conventional cameras. This week Samsung confirmed its second 200MP camera sensor for smartphones, named ISOCELL HP3, stating that it offers “Epic Resolution Beyond Pro”.
Now, making the gap between cameras and smartphones smaller, Xiaomi reveals that the sensor used in its Xiaomi 12S Ultra smartphone is Sony’s IMX989, a 1-inch type image camera sensor designed for mobile devices that may well change the future of smartphone imaging. The company has already declared that, starting July 4, “It’s a new era for mobile photography!”

Sony has not yet officially revealed its new sensor, but this is something that was expected, as the company has previously used a 20.1MP 1″ Exmor RS BSI CMOS from the Sony RX100 camera in the Xperia PRO-I. The sensor was optimized for the smartphone and, as we wrote here before, only offers 12MP coverage (the usual size of smartphone photos), using the center area of the sensor. Now, if you’re a bit confused about the size of a 1-inch type sensor – sometimes also referred to as a 1-inch sensor – here are the dimensions of this sensor: about 8.8 x 13.2mm.
A 1-inch type sensor is not one inch
It’s clear, now that you know the dimensions, that a 1-inch type sensor is much smaller than a Micro Four Thirds sensor (13 x 17.3mm), an APS-C sensor (14.8 x 22.2mm for Canon), or the 35mm full-frame reference, with its 24 x 36mm. Still, it’s bigger than the camera sensor most smartphones use, so its adoption is a step forward. If you’re really curious about “The hypothetical TRUTH about the 1 inch sensor”, the video by Philip Bloom, which will take some 30 minutes to watch, will guide you through the history of it all.
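For a quick sense of scale, the short Python snippet below compares the areas of the formats mentioned above. The phone-sensor entry is an approximate 1/2.55-inch type added for reference and is not taken from the article.

```python
# Quick area comparison of the sensor formats discussed above (mm).
formats = {
    '1-inch type': (13.2, 8.8),
    'Micro Four Thirds': (17.3, 13.0),
    'APS-C (Canon)': (22.2, 14.8),
    'Full frame (35mm)': (36.0, 24.0),
    'Typical 1/2.55" phone sensor': (5.6, 4.2),   # approximate, for scale
}

one_inch_area = 13.2 * 8.8
for name, (w, h) in formats.items():
    area = w * h
    print(f"{name:<30} {area:7.1f} mm^2  "
          f"({area / one_inch_area:4.2f}x the 1-inch type)")
```

Run it and the full-frame format comes out at roughly 7.4 times the area of the 1-inch type, while a typical small phone sensor is only about a fifth of it, which is why the jump to a 1-inch type in a phone is notable.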

Although some say this new sensor – which will be only used for the main camera – has 50MP, others point to 100MP, but we all will have to wait until July 4 to see what the Xiaomi 12S Ultra smartphone uses. With Samsung already proposing two different 200MP sensors, some believe more pixels “is the way to go”, although many others believe 50MP is ideal for smartphones, as optical systems continue to be a weak part of the equation.
We’ve already shared here at PVC, before, another aspect that we believe is crucial for smartphones designed with photographers and videographers in mind: the use of sensors with similar resolutions across cameras – like the 50MP triple array used in the Xiaomi 12 Pro, or the three 64MP cameras in the ZTE Axon 40 Ultra smartphone – which points to a tendency that is most welcome by those wanting to use smartphones as cameras. It is expected that Xiaomi will follow the same logic with the upcoming 12S Ultra smartphone.
No crop, says Xiaomi
One question remains, though, regarding the Xiaomi 12S Ultra: how much of the 1-inch type sensor is being used? Remember that Sony only used a section of the 1-inch type sensor applied to the Xperia PRO-I to create 12MP final photos. The Leitz Phone 1 and the Sharp Aquos R6 also feature a 1-inch type sensor but still only use a section of its area. Has Xiaomi, working with Leica, found a way to use the whole sensor, and if so, what’s the size of the lens?
The images now made available by Xiaomi don’t tell the whole story, so we will have to wait a few more days, but Xiaomi claims that the custom Sony IMX989 sensor is used with no crop… adding that (and this is machine translated from Chinese) “Compared to iPhone 13 Pro Max, the light-sensitive area is increased by 172% and the light-sensitive capability is increased by 76%, while the photo speed is increased by 32.5% and the boot speed is increased by 11%. Sony IMX989LeicaXiaomi Imaging Brain, the pearls complement each other”.
The truth is that Sony is busy developing different sensors, and it’s only natural that developments in one area end up being applied in other segments. The company recently announced its new image sensor with Pregius technology, the 127-megapixel IMX661, part of a group of global shutter CMOS sensors suitable for high-precision inspection of moving objects and for industrial imaging and sensing contexts, such as traffic monitoring and infrastructure inspection.
IMX989 development cost 15 million
Regarding its large CMOS image sensor family, Sony says that “there are many products to choose from, with either a global shutter or a rolling shutter” and notes that its application extends beyond manufacturing, into wide-area monitoring, aerial photography, agriculture, infrastructure inspection, and other areas that require high resolution imaging.
Sony Semiconductor Solutions Group, which integrates image sensors, believes that “the growth in this area is estimated mainly driven by multiple lens and larger sensors. The sensing capabilities are also expected to become a significant contributor in a medium to long-term perspective. In AV applications, the overall market share is decreasing, but the highly value-added segment is growing”, but the company, which actually holds the world’s top share at 49% of the entire market (on a revenue basis), wants to “stay focused on image sensors, pursuing further advances in our leading imaging and sensing technology and ensuring growth in each market segment.”
The Sony IMX989 that will be inside the Xiaomi 12S Ultra smartphone may well be a sign of new things coming from Sony in the near future, as new models from its own Xperia mobile devices are expected. What we know for now, courtesy of Xiaomi’s page on Weibo (and a quick machine translation of the Chinese original) is that the “IMX989 development cost 15 million, with Xiaomi and Sony each bearing half of the cost”, and that after the debut of the sensor on the Xiaomi 12s Ultra, the Sony IMX989 will be available for use “by our domestic counterparts to jointly promote the advancement of mobile imaging.”
Sony Starvis, Sony Exmor, Exmor R and Exmor RS – An Overview
The era of CCD sensors and cameras has come to an end, driven by the commitment to CMOS technology of Sony, the most significant sensor manufacturer worldwide. As a result of Sony’s investment in the creation of high-quality CMOS sensors, its cameras now have faster shutter speeds, greater low-light performance, and more accurate color reproduction. Sony is a dominant force in the imaging sector as a result of its commitment to innovation and technological improvement, and thanks to its emphasis on CMOS technology, Sony cameras are now well known for their cutting-edge features and high-quality imagery.
As a result, Sony’s Starvis and Exmor sensors, which enable users to take incredibly detailed shots even in low-light situations, are a testament to the company’s dedication to innovation and excellence. Moreover, Sony has been able to constantly innovate and enhance its camera solutions due to its dedication to CMOS technology.
In this post, we will explore these technologies by comparing their structures and use cases, setting Sony STARVIS against the Exmor family and its variants, Exmor R and Exmor RS.
What is Sony STARVIS Sensor?
Sony’s STARVIS is a back-illuminated pixel technology used in CMOS image sensors for surveillance camera applications. This technology allows for clear and improved image quality even in low-light conditions, making it ideal for security purposes. This technology has become a popular choice for both indoor and outdoor cameras due to its ability to capture accurate images with minimal noise.
STARVIS increases the sensitivity of back-illuminated CMOS image sensors for security cameras. It achieves outstanding picture quality in the visible light and near-infrared light ranges and has a sensitivity of 2000 mV or more per 1μm² (color product, when imaging with a 706 cd/m² light source, F5.6 in 1 s accumulation equivalent). The increased sensitivity also enables surveillance cameras to capture important details and information for investigations or security purposes. Additionally, STARVIS technology helps reduce motion blur and distortion for sharper and more accurate footage.
What is Sony EXMOR Sensor?
Sony’s Exmor sensor is among the most advanced camera technologies currently available. With their cutting-edge technology and sophisticated features, including improved light sensitivity, greater image quality, and rapid image processing, Sony Exmor sensors have transformed the camera business. The technology is used to give users outstanding imaging capabilities in a wide range of Sony cameras and smartphones.
Exmor technology continued to advance with successive generations introduced over time, but it was the Exmor R series (Exmor’s fifth generation), which considerably increased sensitivity, that sparked a revolution in sensor technology. This performance jump came with a switch from FSI (Front-Side Illuminated) to BSI (Back-Side Illuminated) technology. A BSI sensor typically has a sensitivity level about twice that of a typical front-illuminated image sensor.
The Exmor R technology is distinguished by a back-illuminated pixel architecture, which relocates the readout circuitry from between each pixel’s microlens and photodiode to a position adjacent to the photodiode layer. As a result, the light entering each pixel has a direct path to the pixel’s light-sensitive photodiode, so a higher proportion of the photons hitting each pixel is converted into charge, leading to increased quantum efficiency.
The CMOS image sensor known as “Exmor RS” uses a distinctive “stacked structure.” Instead of the traditional supporting substrate used for back-illuminated CMOS image sensors, this structure layers the pixel section, containing formations of back-illuminated pixels, on top of a chip containing the signal processing circuits.
Different structures of Sony STARVIS, Exmor, Exmor R, and Exmor RS
The architectural difference between Exmor and Exmor R sensors is that the former has an FSI structure, while the latter is built on a BSI architecture. The BSI design in Exmor R sensors results in improved low-light performance, while the FSI structure in Exmor sensors offers better color reproduction and dynamic range. However, the gap in performance between the two technologies has been narrowing in recent years. Since there are no obstacles in the way of the light, it falls directly on the photodiode and light-receiving surface, giving BSI sensors better sensitivity. Furthermore, this avoids the image data loss that would likely result from light falling on the sensor at an angle.
According to information on Sony’s website, Exmor sensors are layered in the correct order and have a front-illuminated structure:
- On-chip microlens
- Color filters
- Metal wiring
- Light receiving surface
- Photodiodes
Exmor R sensors employ the same layers in a back-illuminated structure, but in a different order:
- On-chip microlens
- Color filters
- Light receiving surface
- Photodiodes
- Metal wiring
In Exmor R sensors, the positions of the wiring and the photodiodes are swapped. By placing the photodiodes ahead of the wiring in the light path, the wiring layer is eliminated as a source of light occlusion.
Similar in structure to Exmor R sensors, Sony STARVIS sensors provide superior image quality in extremely low light because of their enhanced NIR sensitivity.
Stacking sensor architecture is a feature of the Exmor RS family. This architecture allows for a larger sensor size within a compact form factor, resulting in improved image quality and low-light performance. Additionally, the Exmor RS family also includes features like phase detection autofocus and HDR video capabilities.
Uses for Sony Exmor and STARVIS sensors in modern embedded vision
The possibilities for a set of sensors are virtually limitless, because image sensors are employed in such a wide range of products: machine vision systems, mobile phones, embedded vision devices, and more. The use of Sony Exmor and STARVIS sensors is widespread, even within the confines of embedded vision. With this in mind, let’s take a closer look at some cutting-edge and creative embedded vision applications that make the most of these sensors.
Effective surveillance
Smart security cameras can count individuals, analyze crowds, count vehicles, and perform other operations. They frequently must work in low light or at night, where excellent sensitivity is essential. Sony Exmor and STARVIS sensors can be useful in these situations.
Intelligent traffic systems
Smart traffic cameras are a kind of smart surveillance system that counts vehicles, automatically reads license plates, identifies passengers based on their faces, and performs other tasks. The best Sony Exmor or STARVIS sensor may be selected based on the specific use case.
Automated sports broadcasting
Cameras based on Exmor and STARVIS sensors meet the high sensitivity and SNR (signal-to-noise ratio) requirements of automated sports broadcasting systems.
Wrapping Up
Recent advancements in sensors indicate a trend toward more varied functionalities in new models. Sony improved the low-light sensitivity and wide dynamic range of their sensors with the Exmor and Starvis technologies, making them perfect for use in security systems and surveillance cameras. They also enable faster processing times and better image quality in smartphones and digital cameras due to their stacked CMOS sensors.
This article provides a full explanation of how these sensors differ in order to eliminate any confusion. In general, the Starvis sensor improves image quality in low-light conditions, making it easier to capture stunning images at night. Meanwhile, the Exmor sensor supports low-noise performance and faster, more precise autofocus, and it is particularly helpful when capturing moving subjects.
The Vadzo camera portfolio includes cameras based on both Sony Exmor and Sony Starvis sensors. If you have any concerns or need assistance using Sony sensors for your projects, feel free to contact us.