Below are some updates from Coherent, a leading provider of test and measurement instruments for optical communications technology, including solutions designed for the Super-C Band.
For further information, feel free to reach out to the Starvoy team to arrange a meeting at our Kanata Test & Measurement Lab, one of our other offices, or whilst we are visiting OFC (6th – 9th March 2023, San Diego).
Introducing WaveShaper® 1000B and 4000B covering the Super C-Band
Coherent introduces the WaveShaper 1000B and 4000B covering the Super C-Band. The 1000B has a 1×1 and the 4000B a 1×4 port configuration. Both units support arbitrary spectral filter shapes of attenuation and phase across the entire operating range from 1523.142 nm to 1573.301 nm. A minimum filter bandwidth of 10 GHz (FWHM) is available. When selecting the “High Resolution” mode – which applies a double pass configuration inside the instrument – the minimum bandwidth reduces to 8 GHz.
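As a rough illustration of what an "arbitrary spectral filter shape" means numerically, the sketch below computes the attenuation profile of a 10 GHz FWHM Gaussian bandpass on a frequency grid. This is a standalone Python example; the actual WaveShaper programming interface is not shown, and the function and parameter names here are our own.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def gaussian_filter_profile(center_nm, fwhm_ghz, grid_ghz=1.0, span_ghz=100.0):
    """Attenuation (dB) of a Gaussian bandpass, sampled on a frequency
    grid around the channel center. Names are illustrative, not the
    WaveShaper API."""
    f0_ghz = C / (center_nm * 1e-9) / 1e9            # optical center frequency
    offsets = np.arange(-span_ghz / 2, span_ghz / 2, grid_ghz)
    sigma = fwhm_ghz / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    transmission = np.exp(-0.5 * (offsets / sigma) ** 2)
    # cap the attenuation floor at 60 dB, an illustrative dynamic-range limit
    attenuation_db = -10.0 * np.log10(np.maximum(transmission, 1e-6))
    return f0_ghz + offsets, attenuation_db

f, att = gaussian_filter_profile(center_nm=1550.0, fwhm_ghz=10.0)
# att[50] is the channel center (0 dB); att[55], 5 GHz away, is ~3 dB down
```

At one half-FWHM from the center the transmission is down by exactly 3 dB, which is how a minimum filter bandwidth such as 10 GHz (FWHM) is specified.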
Original post from: https://www.qorvo.com/design-hub/blog/what-designers-need-to-know-to-achieve-wi-fi-tri-band-gigabit-speeds-and-high-throughput
Engineers are always looking for the simplest solution to complex system design challenges. Look no further for answers in the U-NII 1–8 bands of the 5 and 6 GHz realm. Here we review how state-of-the-art bandBoost™ filters help increase system design capacity and throughput, offering engineers an easy, flexible solution to their complex designs while also helping them meet tough final-product compliance requirements.
With this increase in usage comes an increase in expectations to access Wi-Fi anywhere — throughout the home, both inside and out, and at work. Meeting these expectations requires more wireless backhaul equipment to transport data between the internet and subnetworks. It also requires advancements in existing technology to reach the capacity, range, signal reliability and the rising number of new applications wireless service providers are seeing. Figure 1 shows the exponential increase in wireless applications — from email to videoconferencing, smart home capabilities, gaming and virtual reality — as wireless technology continues to advance.
The 802.11 standard has now advanced to Wi-Fi 6 and Wi-Fi 6E, providing service beyond 5 GHz and into the 6 GHz region up to 7.125 GHz, as shown in Figure 2. This higher frequency range increases video capacity for security systems and streaming.
Figure 2: Tri-Band Wi-Fi frequency bands
However, working in higher frequency ranges can bring challenges such as more signal attenuation and thermal increases — especially when trying to meet the requirements of small form factors. To meet these challenges head-on, RF front-end (RFFE) engineers need to take existing technology to another level. One of those advancements has been in BAW filter technology now being used heavily in Wi-Fi system designs.
As shown in Figure 3 below, Qorvo has three BAW filter variants that boost overall Wi-Fi performance, maximize network capacity, increase RF range, and mitigate interference between the many different in-home radios operating simultaneously.
Figure 3: bandBoost, edgeBoost, and coexBoost filter technology performance
5 & 6 GHz bandBoost Filters
In a previous blog post called An Essential Part of The Wi-Fi Tri-Band System – 5.2 GHz RF Filters, we explored how using bandBoost filters like the Qorvo QPQ1903 and QPQ1904 can help reduce design complexity and aid coexistence. We also explored how these bandBoost filters provide high isolation, offloading that function from the antenna design and allowing for less expensive antennas. The RFFE isolation parameter therefore no longer needs to rest entirely on the antenna. This reduces antenna and shielding costs, providing up to a 20 percent cost reduction.
These bandBoost BAW filters play a key role in separating the U-NII-2A band from the U-NII-2C band, which are separated by a guard band of only 120 MHz, as shown in Figure 4. Using these filters, we can attain Wi-Fi coverage reaching every corner of the home with the highest throughput and capacity. Using this solution in a Wi-Fi system design has shown end-user capacity increases of up to four times.
Unlicensed National Information Infrastructure (U-NII)
The U-NII radio band, as defined by the United States Federal Communications Commission, is part of the radio frequency spectrum used by WLAN devices and by many wireless ISPs.
As of March 2021, U-NII consists of eight ranges. U-NII 1 through 4 are for 5 GHz WLAN (802.11a and newer), and 5 through 8 are for 6 GHz WLAN (802.11ax) use. U-NII 2 is further divided into three subsections: A, B and C.
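For reference, the eight U-NII ranges can be captured in a small lookup table. The band edges below are the commonly cited FCC values; treat them as illustrative rather than normative.

```python
# FCC U-NII band edges in GHz (post the 2020/2021 6 GHz expansion).
# U-NII-1 through 4 serve 5 GHz Wi-Fi; U-NII-5 through 8 serve 6 GHz Wi-Fi.
UNII_BANDS = {
    "U-NII-1":  (5.150, 5.250),
    "U-NII-2A": (5.250, 5.350),
    "U-NII-2B": (5.350, 5.470),
    "U-NII-2C": (5.470, 5.725),
    "U-NII-3":  (5.725, 5.850),
    "U-NII-4":  (5.850, 5.925),
    "U-NII-5":  (5.925, 6.425),
    "U-NII-6":  (6.425, 6.525),
    "U-NII-7":  (6.525, 6.875),
    "U-NII-8":  (6.875, 7.125),
}

def band_of(freq_ghz):
    """Return the U-NII band containing a given frequency, or None."""
    for name, (lo, hi) in UNII_BANDS.items():
        if lo <= freq_ghz < hi:
            return name
    return None

# The U-NII-2B slice sitting between 2A and 2C is 120 MHz wide: the
# narrow transition a 5 GHz bandBoost filter has to resolve.
gap_mhz = (UNII_BANDS["U-NII-2C"][0] - UNII_BANDS["U-NII-2A"][1]) * 1000
```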
Figure 4: 5 GHz bandBoost filters and U-NII 1-4 bands
These filters are much smaller than legacy filters on the market used in Wi-Fi applications, allowing for more compact tri-band radios. They also offer superior isolation, achieving greater than 80 dB of system isolation. This helps engineers meet the stringent Wi-Fi 6 and 6E requirements.
Figure 5: Benefits of using QPQ1903 and QPQ1904 bandBoost filters
The addition of multiple-input multiple-output (MIMO) and higher frequencies in the 6 GHz range increases system temperatures. With tighter thermal requirements, robust RFFE components are a must. Much of the industry specifies parts in the 60°C to 80°C range, but higher-temperature operation is needed given the system temperatures produced in this frequency range. To solve these challenges, many hours of design effort have been spent increasing the temperature capabilities of BAW. As product development for Wi-Fi 5, 6/6E, and the upcoming Wi-Fi 7 has become more challenging, and as new opportunities such as automotive have opened for BAW, the push for higher temperature capability has come to the forefront.
Qorvo BAW technology engineers have delivered innovative devices that exceed the usual 85°C maximum operating temperature, going up to +95°C. The benefits are significant for both product designers and end customers. Sleeker devices are now achievable, as end products no longer require large heat sinks. This also reduces design time, as engineers can more easily meet system thermal requirements. Another heat-related advancement is that the bandBoost BAW products work at +95°C while still maintaining a 0.5 to 1 dB insertion loss.
This lower insertion loss improves Wi-Fi range and receive quality by up to 22 percent. Lower insertion loss also means improved thermal capability and performance, as the RF signal seen at the RFFE low-noise amplifier (LNA) is improved. Below, Figure 6 shows the features and benefits of the QPQ1903 and QPQ1904 bandBoost™ BAW filters.
Figure 6: Features and benefits of QPQ1903 and QPQ1904
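To see roughly why a fraction of a dB of insertion loss matters for range, here is a hedged back-of-envelope estimate: under a path-loss model with exponent n, each dB recovered in the front end scales range by 10^(dB / (10·n)). This toy model is our own assumption, not Qorvo's published methodology.

```python
def range_gain(delta_loss_db, path_loss_exponent=2.0):
    """Back-of-envelope estimate: recovering delta_loss_db of link budget
    extends range by 10^(dB / (10 * n)) under an n-exponent path-loss
    model. Illustrative only; real Wi-Fi range depends on many factors."""
    return 10 ** (delta_loss_db / (10 * path_loss_exponent))

# With a free-space exponent of 2, recovering ~1.7 dB stretches range
# by roughly 22 percent, the same order as the figure cited above.
gain = range_gain(1.7)
```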
Not only do these filters benefit the LNA; they are also small enough, and perform well enough, to be installed inside a tiny integrated Wi-Fi module package housing the LNA, switch, PA, and filter. Doing this drastically simplifies the end-product system layout, making design easier and speeding time-to-market. Engineers are no longer burdened with matching and placing individual passive and active components on their PC board; all of that is done inside complex integrated modules called integrated front-end modules (iFEMs), creating a plug-and-play solution easily installed in their design.
A perfect example of this is the QPF7219 2.4 GHz iFEM, as seen in Figure 7. Qorvo has not only provided solutions with discrete edgeBoost BAW filters to increase output and capacity across all Wi-Fi channels, but has gone a step further by including this filter inside an iFEM, the QPF7219, to provide customers with a drop-in, pin-compatible replacement delivering the same capacity and range performance. This gives customers design flexibility, saves board space, and is the first of its kind on the market.
Figure 7: edgeBoost used as discrete and inside an iFEM
The need for smaller and sleeker product designs is always top of mind for Wi-Fi engineers. But achieving that goal means component designers need to develop smaller products across many areas of the design, not just one or two. From a tri-band Wi-Fi chipset perspective, Qorvo has addressed this head-on, providing an entire group of iFEM alternatives to address the many signal transmit and receive lines in a product. This allows Wi-Fi design manufacturers to manage all the U-NII and 2.4 GHz bands in a tri-band end-product design.
This new design approach of combining the filter inside the iFEM equates to a smaller PC board and less shielding, as shown in Figure 9 below. Shielding, matching, and PC board space are expensive, not to mention the additional time associated with sourcing these materials. By placing all the RFFE components inside a module, system designers can save cost, design faster, and get their products to market more quickly.
Figure 9: Putting the filter technology inside the iFEM removes shielding and reduces overall RFFE form-factor
As Wi-Fi system designers continue to be challenged with new specification requirements, they need newer or enhanced technologies to meet the need. By collaborating with our customers, we have provided state-of-the-art solutions to solve the tough thermal, performance, size, interference, capacity, throughput, and range difficulties seen by their end-customers. These solutions enable them to improve their designs to meet the Wi-Fi wave of today and in the future.
About the Author
Senior Marketing Manager, Wireless Connectivity Business Unit
With over 20 years of experience in the wireless industry, Igor helps Qorvo engineering teams create state-of-the-art RF components and solutions. He inspires the creation of new wireless connectivity products and eco-systems innovations that make a deep impact on our everyday life.
While some feel GaN is still a relatively new technology, many can’t dispute how it’s advanced to the head of the class. GaN, aka gallium nitride, is a technology on the cusp of dethroning silicon LDMOS, which has long been the material of choice in high-power applications. GaN is a direct-bandgap semiconductor technology belonging to the III-V group. It is increasingly being used in power electronics because of its higher efficiency, superior high-voltage sustainability, reduced power consumption, higher-temperature attributes, and power-handling characteristics.
These attributes have thrust GaN into the 5G RF spotlight – especially when it comes to mmWave 5G networks. And, while we all have ‘heard’ the promises of 5G, today, many of us in big cities – about 5 million of us to be more precise – are starting to realize those promises as major wireless carriers roll 5G out to their customers. But we are not there yet. Not even close. The goal is to connect 2.8 billion users by 2025. To reach this goal means to revamp the entire mobile infrastructure – a complex undertaking. But it can be done. And with the help of GaN technology, 5G will be in billions of people’s hands before you know it.
Recently, Embedded.com invited Qorvo’s own Roger Hall to pen a series of 5G articles that explain the complexities of building out the infrastructure and where GaN fits into the innovations that will bring 5G to the masses. Here are summaries of each article with a link for a deeper dive.
5G and GaN: Understanding Sub-6 GHz Massive MIMO Infrastructure
In this article, Roger explains the advantages for carriers to implement Massive MIMO technology as a means to minimize cost and increase capacity when rolling out 5G. He explores sub-6 GHz and why it’s important for increasing the adoption and expansion of 5G. He also addresses how GaN is being used in Massive MIMO Infrastructure applications. Read more >
5G and GaN: The Shift from LDMOS to GaN
Here Roger examines how the power demands of sub-6 GHz 5G base stations are driving a shift from silicon LDMOS amplifiers to GaN-based solutions, and what makes GaN a viable technology for many RF applications. Roger also reviews some of the tradeoffs engineers need to consider between these two technologies and why GaN is becoming the clear winner in many 5G solutions. Read more >
5G and GaN: What Embedded Designers Need to Know
Building on the previous article, Roger provides insight for embedded designers to fully realize the potential of GaN. He discusses misconceptions about GaN, explores its characteristics, and offers best practices to maximize its performance. Read more >
5G and GaN: Future Innovations
In his fourth and final article in this series, Roger looks to the future of GaN’s role in base stations. He provides a peek into GaN innovations being made today that will improve linear efficiency, power density and reliability and the implications of those improvements. Read more >
For more information on GaN technology, visit here.
About the Author
About Roger Hall
Roger is the General Manager of High-Performance Solutions at Qorvo. He leads program management and applications engineering for Wireless Infrastructure, Defense and Aerospace, and Power Management markets. This overarching role gives him a unique lens to view and interpret where RF technologies play fundamental parts in enabling future innovations.
Qorvo Blog Team
One part technical, one part content, and one part strategic, our small team is dedicated to connecting you with helpful, timely insights from some of the bright minds at Qorvo.
Original Blog link: https://www.qorvo.com/design-hub/blog/why-gan-is-5g-super-power
The AS8579 sensor offers the simplest way for car makers to comply with the UN Regulation 79, while giving the best detection performance
For automotive design engineers, it is unusual to find a new technology solution that performs better than existing approaches, reduces cost, and is easier to implement in the application. But that is exactly what a new capacitive sensing chip, ams’ new AS8579, offers when used for hands-on detection (HOD) in cars that provide driver assistance functions.
It is the result of the application of a familiar and proven measurement principle – I/Q demodulation – to the job of sensing the position of the driver’s hands on the steering wheel. And it is markedly superior to any of the existing technologies in use for HOD in cars. Watch the highlights in our video:
Essential safety requirement in new car designs
The HOD function is required by United Nations Regulation 79 and applies, wherever the regulation has been ratified, to all new cars that have a Lane Keeping Assist System (LKAS). It has already been adopted by the European Union for new production vehicles from 1 April 2021. The purpose of the HOD system is to continuously monitor the readiness of the driver to assume control of the steering system in an emergency, or in the event of failure of the LKAS.
Various technologies have been developed to provide this HOD function, but have had limitations: it is possible for drivers who want to avoid holding the steering wheel to fool the current monitoring system, which could compromise safety. And some existing solutions also perform poorly in certain operating conditions.
One approach to HOD has been the torque sensor: this detects the continual, minute deflections produced when the driver grips the steering wheel. The big drawback of this technology is that it can be easily fooled: the driver may take their hands off the wheel and ‘hold’ it by pressing upwards against it with their leg.
The problems with torque sensors have led the car industry to adopt a form of capacitive sensing for HOD: it monitors the driver’s grip on the steering wheel by detecting the change in capacitance of the steering wheel when the driver’s hands – which absorb electrical charge – come into contact with it. This technique only requires a single sensor chip connected to a metal sensor element built into the steering wheel.
Until now, automotive system manufacturers have used the charge-discharge method of capacitive sensing: this is a well understood technique, as it has been applied for many years in products such as touchscreens and touch-sensing buttons. But detection fails when the driver wears gloves, and false detection signals generated by the presence of moisture or humidity undermine the safety performance of hands-on detection based on this method of capacitive sensing. This type of capacitive sensor can even be fooled if the driver wedges a capacitive object, such as a piece of fruit or a plastic water bottle, into the frame of the steering wheel. So again, the implementation of this charge-discharge method of capacitive sensing potentially compromises safety.
It is true that other technologies are already applied to other driver-monitoring functions. For instance, 2D optical sensing is in use in systems for monitoring the position of the driver’s head. However, these 2D optical-sensing systems are not capable of performing HOD. This means that capacitive sensing is the most viable technology for HOD that is ready for deployment today. And now ams has a new approach to capacitive sensing which will meet all the safety requirements imposed by the automotive industry, and which is simple to implement.
Better performance, lower cost
This new solution from ams provides better performance, and with fewer components than the existing charge-discharge technique for capacitive sensing.
By implementing reliable capacitive sensing based on I/Q demodulation, the AS8579 capacitive sensor performs HOD in a way that cannot be fooled. Like the charge-discharge method, I/Q demodulation is a proven and well-known technique for capacitive sensing. Its advantage is that it measures the resistive as well as the capacitive element of a system’s impedance. The effect is that, unlike the charge-discharge method, it works reliably in difficult conditions, such as in the presence of moisture or when the driver is wearing gloves. And because it cannot be fooled, it provides assured detection of the driver’s grip on the steering wheel. An added benefit of the AS8579-based solution is that it can operate via a heated steering wheel’s heater element, so it does not require a separate sensor element to be built into the steering wheel.
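The I/Q demodulation principle itself is easy to sketch: multiply the sensed waveform by in-phase and quadrature copies of the drive signal and average. Loosely, the in-phase result tracks the resistive part of the load and the quadrature result the capacitive part. The code below is an illustrative signal-processing model, not AS8579 firmware.

```python
import numpy as np

def iq_demodulate(signal, fs, f_drive):
    """Recover the in-phase (I) and quadrature (Q) components of a
    sensed waveform relative to the drive frequency. Loosely, I tracks
    the resistive part of the load impedance and Q the reactive
    (capacitive) part, which is what lets an I/Q-based sensor tell a
    real hand from moisture on the wheel. Illustrative model only."""
    t = np.arange(len(signal)) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f_drive * t))
    q = 2 * np.mean(signal * np.sin(2 * np.pi * f_drive * t))
    return i, q

# Synthetic sensed waveform: unit amplitude, phase-shifted by the load.
fs, f_drive = 1e6, 100e3   # 100 kHz is one of the AS8579 drive frequencies
t = np.arange(1000) / fs
phase = np.pi / 4
sensed = np.cos(2 * np.pi * f_drive * t - phase)
i, q = iq_demodulate(sensed, fs, f_drive)
magnitude = np.hypot(i, q)   # recovered amplitude, ~1.0
angle = np.arctan2(q, i)     # recovered phase, ~pi/4
```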
This is how the AS8579 eliminates the normal trade-offs in engineering design:
It performs better – it cannot be fooled, and it operates in all conditions
It costs less – it is a single-chip solution, and requires no dedicated sensing element in a heated steering wheel
It is easy to implement – the chip’s output is an impedance measurement, and the system controller simply applies a threshold value to determine whether hands are on the steering wheel or not.
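The "simply applies a threshold" step above might look like the following hypothetical sketch; the baseline and threshold values are invented for illustration and are not AS8579 register semantics.

```python
def detect_hands_on(measured_ohms, baseline_ohms, threshold_ohms):
    """Hypothetical decision step: a hand on the wheel shifts the sensed
    impedance away from its no-touch baseline, so flag hands-on when the
    deviation exceeds a calibrated threshold. Values are illustrative."""
    return abs(measured_ohms - baseline_ohms) > threshold_ohms

# Illustrative calibration: no-touch baseline of 1000 ohms, 100 ohm margin.
hands_on = detect_hands_on(measured_ohms=820.0,
                           baseline_ohms=1000.0,
                           threshold_ohms=100.0)  # grip detected
```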
Ready for use in automotive designs
The AS8579 is fully automotive qualified and offers multiple on-chip diagnostic functions, ensuring support for the ISO 26262 functional safety standard up to ASIL B. Operating at one of four selectable driver-output frequencies – 45.45 kHz, 71.43 kHz, 100 kHz or 125 kHz – the AS8579 offers high immunity to electromagnetic interference.
Andreas Zenz joined ams in 2013. Since then he has worked in application engineering for automotive, industrial, medical and robotics customers. In addition, he has taken on the product management role for the AS8579 automotive-qualified capacitive sensor.
The field of data science is rapidly changing as new and exciting software and hardware breakthroughs are made every single day. Given this rapidly changing landscape, it is important to take the time to understand and investigate some of the underlying technology that has shaped, and will shape, the data science world. As an undergraduate data scientist, I often wish more time were spent understanding the tools at our disposal and when they should appropriately be used. One prime example is the variety of options to choose from when picking an implementation of a nearest-neighbor algorithm, a type of algorithm prevalent in pattern recognition. Whilst there is a range of different types of nearest-neighbor algorithms, I specifically want to focus on Approximate Nearest Neighbor (ANN) and the overwhelming variety of implementations available in Python.
My first project during my internship at GSI Technology explored benchmarking ANN algorithms to help understand how the choice of implementation can change depending on the type and size of the dataset. This task proved challenging yet rewarding: thoroughly benchmarking a range of ANN algorithms requires a variety of datasets and a lot of computation. The work produced some valuable results (as you will see further down), in addition to a few insights and clues as to which implementations and implementation strategies might become industry standard in the future.
What Is ANN?
Before we continue, it’s important to lay out the foundations of what ANN is and why it is used. New students to the data science field might already be familiar with ANN’s brother, kNN (k-Nearest Neighbors), as it is a standard entry point in many early machine learning classes.
kNN works by classifying unclassified points based on “k” number of nearby points where distance is evaluated based on a range of different formulas such as Euclidean distance, Manhattan distance (Taxicab distance), Angular distance, and many more. ANN essentially functions as a faster classifier with a slight trade-off in accuracy, utilizing techniques such as locality sensitive hashing to better balance speed and precision. This trade-off becomes especially important with datasets in higher dimensions where algorithms like kNN can slow to a grueling pace.
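For readers new to the area, a minimal exact-kNN classifier makes the cost clear: every query scans the whole training set, which is exactly the work ANN methods trade a little accuracy to avoid. A small sketch with two of the distance metrics mentioned above:

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3, metric="euclidean"):
    """Classify `query` by majority vote among its k nearest training
    points. A minimal reference implementation: exact kNN scans every
    point, which is the cost ANN methods avoid."""
    def dist(a, b):
        if metric == "euclidean":
            return math.dist(a, b)
        if metric == "manhattan":
            return sum(abs(x - y) for x, y in zip(a, b))
        raise ValueError(f"unknown metric: {metric}")
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], query))[:k]
    vote = Counter(labels[i] for i in nearest)
    return vote.most_common(1)[0][0]

points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
# A point near the origin is voted into class "a".
```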
Within the field of ANN algorithms, there are five different types of implementations with various advantages and disadvantages. For people unfamiliar with the field here is a quick crash course on each type of implementation:
Brute Force: whilst not technically an ANN algorithm, it provides the most intuitive solution and a baseline against which to evaluate all other models. It calculates the distance between all points in the dataset before sorting to find the nearest neighbors for each point. Incredibly inefficient.
Hashing-Based, sometimes referred to as LSH (locality-sensitive hashing), involves a preprocessing stage where the data is filtered into a range of hash tables in preparation for querying. Upon querying, the algorithm iterates back over the hash tables, retrieving all similarly hashed points and then evaluating proximity to return a list of nearest neighbors.
Graph-Based, which also includes tree-based implementations, starts from a group of “seeds” (randomly picked points from the dataset) and generates a series of graphs before traversing them using best-first search. By tracking visited vertices from each neighbor, the implementation is able to narrow in on the “true” nearest neighbor.
Partition-Based, similar to hashing, partitions the dataset into progressively more identifiable subsets until eventually landing on the nearest neighbor.
Hybrid, as the name suggests, is some form of combination of the above implementations.
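A toy version of the hashing-based approach shows the idea: the sign pattern of dot products with random hyperplanes buckets angularly similar vectors together, so a query only has to refine a small candidate set. This is a simplified sketch of the technique, not any of the benchmarked libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomHyperplaneLSH:
    """Minimal hashing-based ANN sketch: points whose signs agree with a
    query across most random hyperplanes form the candidate set, which is
    then refined by true distance. Illustrative, not a production index."""

    def __init__(self, dim, n_planes=16):
        self.planes = rng.normal(size=(n_planes, dim))

    def signature(self, x):
        # boolean signature: on which side of each hyperplane x falls
        return (x @ self.planes.T) > 0

    def fit(self, points):
        self.points = points
        self.sigs = self.signature(points)
        return self

    def query(self, q, k=1):
        # rank points by signature agreement, then refine by true distance
        agreement = (self.sigs == self.signature(q)).sum(axis=1)
        candidates = np.argsort(agreement)[-max(10 * k, 50):]
        dists = np.linalg.norm(self.points[candidates] - q, axis=1)
        return candidates[np.argsort(dists)[:k]]

pts = rng.normal(size=(200, 25))
index = RandomHyperplaneLSH(dim=25).fit(pts)
nearest = index.query(pts[0] + 0.01, k=1)  # a query near point 0
```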
Because of the limitations of kNN, such as dataset size and dimensionality, algorithms such as ANN become vital to solving classification problems with these kinds of constraints. Examples include feature extraction in computer vision, machine learning, and many more. Because of the prominence of ANN, and the range of applications for the technique, it is important to gauge how different implementations of ANN compare under different conditions. This process is called “benchmarking”. Much like a traditional experiment, we keep all variables constant besides the ANN algorithms, then compare outcomes to evaluate the performance of each implementation. Furthermore, we can repeat this experiment for a variety of datasets to help understand how these algorithms perform depending on the type and size of the input datasets. The results are often valuable in helping developers and researchers decide which implementations are ideal for their conditions, and they also clue the creators of the algorithms into possible directions for improvement.
Bernhardsson, who built the code with help from Aumüller and Faithfull, designed a Python framework that downloads a selection of datasets of varying dimensionality (25 to nearly 28,000 dimensions) and size (a few hundred megabytes to a few gigabytes). Then, using some of the most common ANN algorithms from libraries such as scikit-learn or the Non-Metric Space Library, they evaluated the relationship between queries per second and accuracy. Specifically, accuracy was measured as “recall”: the ratio of the number of returned points that are true nearest neighbors to the total number of true nearest neighbors. Formulaically: recall = |returned ∩ true nearest neighbors| / |true nearest neighbors|.
Intuitively recall is simply the correct predictions made by the algorithm, over the total number of correct predictions it could have made. So a recall of “1” means that the algorithm was correct in its predictions 100% of the time.
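In code, recall reduces to a set intersection:

```python
def recall(returned, true_neighbors):
    """Fraction of the true nearest neighbors that the ANN query
    actually returned: |returned ∩ true| / |true|."""
    true_set = set(true_neighbors)
    return len(set(returned) & true_set) / len(true_set)

# The ANN index found 8 of the 10 true neighbors, so recall is 0.8.
r = recall(returned=[1, 2, 3, 4, 5, 6, 7, 8, 99, 100],
           true_neighbors=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```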
Using the project, which is available for replication and modification, I set about setting up the benchmark experiment. Given the range of different ANN implementations to test (24, to be exact), many packages need to be installed, and a substantial amount of time is required to build the Docker environments. Assuming everything installs and builds as intended, the environment should be ready for testing.
Our results (top) and Bernhardsson’s results (bottom):
As you can see from the two side-by-side plots of algorithm accuracy versus query speed, there are some differences between my results and Bernhardsson’s. In our case, there are 18 functions plotted as opposed to 15 in the other. This is likely because the project has since been updated to include more functions following Bernhardsson’s initial tests. Furthermore, the benchmarking was performed on a different machine from Bernhardsson’s, which likely introduced some additional variability.
What we do see, which is quite impressive, is that many of the same algorithms that performed well for Bernhardsson also performed well in our tests. This suggests that across multiple benchmarks there are some clearly well-performing ANN implementations. NGT-onng, hnsw(nmslib) and hnswlib all performed exceedingly well in both cases. Hnsw(nmslib) and hnswlib both belong to the Hierarchical Navigable Small World family, an example of a graph-based implementation for ANN, and NGT-onng is also a graph-based implementation for ANN search. In fact, among the many algorithms tested, graph-based implementations seemed to perform the best, suggesting that for this type of dataset they outperform their competitors.
In contrast to the well-performing graph-based implementations, we can see BallTree(nmslib) and rpforest, both of which are quite underwhelming in comparison. BallTree and rpforest are examples of tree-based ANN algorithms (a more rudimentary form of a graph-based algorithm); BallTree specifically is a hybrid tree-partition algorithm combining the two methods. A combination of factors likely causes these two ANN algorithms to perform poorly when compared to HNSW or NGT-onng, but the main reason seems to be that tree-based implementations execute more slowly under the conditions of this dataset.
Although graph-based implementations outperform other competitors it is worth noting that graph-based implementations suffer from a long preprocessing phase. This phase is required to construct the data structures necessary for the computation of the dataset. Hence using graph-based implementations might not be ideal under conditions where the preprocessing stage would have to be repeated.
One advantage our benchmark experiment had over Bernhardsson’s is that our tests were run on a more powerful machine. Our machine (see appendix for full specifications) utilized two Intel Xeon Gold 5115s, an extra 32 GB of DDR4 RAM (totaling 64 GB), and 960 GB of solid-state storage. This difference likely cut down on computation time considerably, allowing for faster benchmarking.
A higher resolution copy of my results can be found in the appendix.
Conclusion and Future Work
Overall, my first experience benchmarking ANN algorithms has been an insightful and appreciated learning opportunity. As discussed above, there are some clear advantages to using NGT-onng and hnsw(nmslib) on smaller, low-dimensional datasets such as the glove-25-angular dataset included with Erik Bernhardsson’s project. These findings, whilst coming at an immense computational cost, are nonetheless useful for data scientists aiming to tailor their use of ANN algorithms to the dataset they are utilizing.
Whilst the glove-25-angular dataset was a great place to start, I would like to explore how these algorithms perform on even larger datasets such as the notorious deep1b (deep one billion) dataset, which includes one billion 96-dimensional points in its base set. Deep1b is an incredibly large dataset that would highlight some of the limitations as well as the advantages of various ANN implementations and how they trade off query speed against accuracy. Thanks to the hardware provided by GSI Technology, this experiment will be the topic of our next blog.
Aumüller, Martin, Erik Bernhardsson, and Alexander Faithfull. “ANN-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms.” International Conference on Similarity Search and Applications. Springer, Cham, 2017.
Liu, Ting, et al. “An investigation of practical approximate nearest neighbor algorithms.” Advances in neural information processing systems. 2005.
Malkov, Yury, et al. “Approximate nearest neighbor algorithm based on navigable small-world graphs.” Information Systems 45 (2014): 61–68.
Laarhoven, Thijs. “Graph-based time-space trade-offs for approximate near neighbors.” arXiv preprint arXiv:1712.03158 (2017).
LIDAR plays a major role in automotive, as vehicles perform tasks with less and less human supervision and intervention. As a leader in VCSEL technology, ams is helping to shape this revolution.
LIDAR (Light Detection and Ranging) is an optical sensing technology that measures the distance to other objects. It is currently known for many diverse applications in industrial, surveying, and aerospace, but is a true enabler for autonomous driving. As the automotive manufacturers continue their push to design and release high-complexity autonomous systems, we likewise develop the technology that will enable this. That is why ams continues to bring our high-power VCSELs to the automotive market and to test the limits on peak power, shorter pulses, and additional scanning features which enable our customers to improve their LIDAR systems.
In 2019, ams together with ZF and Ibeo announced a hybrid solution called True Solid State where, like flash technology, no moving parts are needed to capture the full scene around the vehicle. By sequentially powering a portion of the laser, a scanning pattern can be generated, combining the advantages of flash and scan systems.
Making sense of the LIDAR landscape
At ams, we classify LIDAR systems on seven elements: ranging principle, wavelength, beam steering principle, emitter technology and layout, and receiver technology and layout. Here we discuss the first five.
The most dominant implementation to measure distance (ranging) is Direct Time of Flight (DTOF): a short (few-nanosecond) laser pulse is emitted, reflected by an object, and returned to a receiver. The time difference between sending and receiving can be converted into a distance measurement. Moreover, with duty cycles of <1%, such a system takes thousands of distance measurements per second. The laser pulse is typically in the 850-940 nm range, where components are readily available and most affordable. However, systems can also use 1300 or 1550 nm; the big advantage is that eye-safety regulations allow more energy to be used at these wavelengths, which in theory provides more range. The downside is that components are expensive.
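The DTOF arithmetic itself is a one-liner: distance is half the round-trip time multiplied by the speed of light. A quick illustrative sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def dtof_distance(round_trip_ns):
    """Direct time of flight: the pulse travels to the target and back,
    so distance is half the round trip at the speed of light."""
    return C * (round_trip_ns * 1e-9) / 2

# A ~667 ns round trip corresponds to a target roughly 100 m away.
d = dtof_distance(667.0)
```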
To scan the complete surroundings (or field of view) of a vehicle, the system needs to be able to shoot pulses in all directions. This is the beam steering principle. Classical systems used rotating sensor heads and mirrors to scan the field of view section by section. As these systems are bulky, they are being replaced by static systems with internal moving mirrors. MEMS mirrors are also about to enter the market. Another approach is flash, where no moving parts are needed at all. The light source illuminates the complete field of view, and the sensor captures that same field in a single frame like a photo. As the full scene is illuminated, and to remain eye safe, this means the range must be limited.
On the emitter side, edge emitters continue to be frequently used, based on earlier developments. They have a high power density, making them suitable in combination with MEMS mirrors. Where first iterations were single emitters, 2, 4, 8 or 16 emitters are now being integrated in a single bar. Fiber lasers are another interesting technology: they offer even higher power density, are typically used at the 1550 nm wavelength, and typically come as a single-emitter source.
ams is a leading supplier of VCSEL emitter technology. Our high-power VCSELs differentiate themselves in scan and flash applications, as they are very stable over temperature, less sensitive to individual emitter failures, and easy to integrate. However, the best characteristic of VCSELs is their ability to form emitter arrays. This makes VCSELs easy to scale. It also allows for addressability, or powering selective zones of the die. This enables the True Solid State topology, which we consider to be the most well-rounded LIDAR solution.
LIDAR enables Autonomous Driving
The most commonly accepted way to classify vehicles on their level of autonomy is by the definitions of the Society of Automotive Engineers (SAE). At SAE Level 3 and above, the vehicle takes over responsibility from the driver and assistance turns into autonomy. This means the vehicle should be able to perform its task without human supervision and intervention. This requires a step function in required system performance. Where Level 1 and Level 2 vehicles assist the driver and typically rely on camera or radar, or a combination, there are shortcomings in these technologies for 3D object detection. LIDAR technology addresses this, and there is wide consensus in the industry that from Level 3 onwards, LIDAR is needed for 3D object detection.
When 3D LIDAR is combined, or fused, with camera and radar, a high-resolution map of the vehicle’s surroundings can be constructed, allowing the vehicle to safely fulfil its mission. The automotive industry started with more straightforward driver-assist use cases in Level 1 and Level 2. As sensors and data processing get more advanced, further, more difficult use cases can be covered, such as Highway Pilot or City Pilot.
Ultimately, when every conceivable use case can be fulfilled by the system we define this as a Level 5 vehicle – fully autonomous and the holy grail of autonomous driving. This is expected to still be quite a number of years out from today. Moreover, there will be huge pressure to bring down cost and rationalize content per vehicle – to make autonomous driving available to the mass market.
Interested to learn more?
Let us know if you would like to discuss how you could be using ams technology to support your potential LIDAR applications! Contact ams sensor experts