Presentations at SSI International 2021 are grouped into four key themes which collectively provide comprehensive coverage of the global sensors industry.
If you are interested in speaking at SSI International 2021, please contact info@sensorsinternational.net or call +44 (0)2476 718 970.
The applications of LiDAR are so broad that they seem limited only by human imagination. In recent (and not so recent) years we have seen the constant consideration and deployment of LiDARs for distance measurements in agriculture, archaeology, automotive, geology, atmospheric monitoring, law enforcement, military, mining, astronomy and more. But every application has its own requirements, set by its system integrators, with different degrees of difficulty and maturity. In the automotive industry in particular, some OEMs envision placing LiDARs in headlights and taillights, bumpers, the roof and other locations on the car, at low cost, with high durability, low maintenance, high stability, and so on, with each of these locations carrying different required specifications. The goal of this panel is to bring together Tier 1 suppliers and OEMs from different market segments and applications and discuss the challenges and opportunities for the LiDAR supply chain.
As autonomous vehicles (AVs) come closer to becoming reality, it becomes mandatory to characterise the performance of their sensors before production. This analysis calls for large amounts of ground truth data with precise information about the real-world position and pose of the objects around the vehicle. In this talk, we introduce an AI-enabled sensor data analytics tool, realised with novel deep learning techniques and supporting a plethora of AV sensors such as radar, LiDAR and camera, which can significantly reduce manual effort and thus shorten AV production time.
AVs face many challenging situations as they strive towards L4/L5 operation. Difficult “edge cases” such as unprotected left turns, debris in the road and partially obstructed views pose significant challenges to perception teams seeking to quickly detect and identify objects. Ultra-high-resolution FMCW LiDAR, with its high sensitivity and Doppler velocity measurement, is helping solve these difficult cases, enabling faster, more accurate object detection and classification.
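As a point of reference, the per-point range and radial velocity of an FMCW lidar can be sketched from the beat frequencies of a triangular up/down chirp. The relations below are a textbook approximation assumed for illustration; sign conventions and symbols vary between implementations and are not taken from this abstract.

```latex
% Assumed textbook FMCW relations for a triangular chirp:
% S = chirp slope [Hz/s], \lambda = optical wavelength, c = speed of light
\[
f_{\mathrm{up}} = f_R - f_D,\qquad f_{\mathrm{down}} = f_R + f_D,\qquad
f_R = \frac{2RS}{c},\qquad f_D = \frac{2v}{\lambda}
\]
\[
\Rightarrow\quad R = \frac{c\,(f_{\mathrm{up}} + f_{\mathrm{down}})}{4S},\qquad
v = \frac{\lambda\,(f_{\mathrm{down}} - f_{\mathrm{up}})}{4}
\]
```

The key point is that a single measurement yields both range and Doppler velocity per point, which is what allows faster separation of moving objects from the static background.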
In recent years there has been increasing interest in sensor technologies across a wide range of applications. Whilst industrial and consumer applications have driven the first wave of sensor activities, the acceleration of connected autonomous vehicles is focussing attention on advanced sensor technologies such as LiDAR. Dr Furlong's presentation shines a light on the key materials developments that provide the underpinning technologies that are essential to make LiDAR enabled autonomous vehicles a reality in terms of both cost and performance.
Today’s LiDAR solutions are either too expensive or don’t meet the demanding performance requirements for automotive applications, which include long range, wide field of view, high resolution and frame rate at aggressive cost, size, weight and power (C-SWAP) targets -- all while meeting rigorous automotive reliability standards. This talk analyzes the underlying physics governing the behavior of LiDAR systems and derives important insights into the design imperatives required to deliver viable products for the automotive market. We will present these insights and demonstrate how Lumotive can build a high-performance, low-cost LiDAR system based on Liquid Crystal Metasurface technology that utilizes proven silicon manufacturing processes to enable a solid-state, reliable sensor.
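For context, a commonly used first-order form of the lidar link budget (assumed here purely for illustration, for an extended Lambertian target that fills the beam; not drawn from Lumotive's talk) shows why range, receiver aperture and optical efficiency dominate the design trade-offs:

```latex
% One common first-order lidar range equation (extended Lambertian target):
% P_t = transmitted power, \rho = target reflectivity, A_r = receiver aperture area,
% R = range, \eta_{sys} = optics/detector efficiency, \eta_{atm} = one-way atmospheric transmission
\[
P_r \;\approx\; P_t \cdot \frac{\rho}{\pi} \cdot \frac{A_r}{R^{2}} \cdot \eta_{\mathrm{sys}} \cdot \eta_{\mathrm{atm}}^{2}
\]
```

The inverse-square dependence on range is what forces the aggressive trade-offs between transmit power, aperture size and cost referred to above.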
In-sensor fusion has emerged as an important development for perception systems. By distributing perception capabilities to the edge of the network, critical real-time information—including object classification, dimensionality and velocity—can be obtained at the point of acquisition, streamlining AV path-planning and decision-making. This talk will explore edge perception’s role in reducing latency, lowering costs, and securing functional safety.
We define Situation Awareness as the ability to simultaneously perform Perception, Localisation and Comprehension of the environment. This ability is fundamental to any Smart Machine such as a self-driving car: it is a prerequisite to take action in the real world. The initial approach of the industry to solve this key challenge has been to increase the capabilities of separate sensors like LiDAR, Cameras, Radar and IMUs; combine them via Sensor Fusion; and process the resulting data using Machine Learning (ML) on powerful computing platforms. However, the Scalability and Reliability levels required for mass-produced Smart Machines can’t be achieved without solving the new challenges that this approach creates, including the problems of Calibration, Synchronisation, Latency, Energy consumption, Cost and “explainability”. In the presentation we will introduce a new technical solution and the working principles of its main components: the combination of a novel broadband laser imager with an innovative processing approach that allows for an industry-first real-time Full Situation Awareness in a standalone device, including the identification of the Material composition of objects (skin, plastic, metal...). We'll explore how this new technology will not only accelerate the emergence of fully automated Smart Machines like L4-L5 Self-Driving Cars, Robots and Autonomous Flying Taxis, but will also bring the safety benefits of Full Situation Awareness to current human-controlled machines like L1-L3 ADAS (Advanced Driving Assistance Systems), Construction/Mining equipment, Helicopters and many more.
According to market research, within the next five to ten years, the largest segment within the automated vehicle market will be held by autonomous shuttles. Core to their mass adoption are safety considerations, both for passengers and for the vulnerable road users in the immediate vicinity of the shuttle. How are these safety requirements achieved? Many of today’s perception platforms are based on sensor fusion, wherein different technologies are paired for optimal results. Most experts agree that LiDAR technology is an integral piece of the solution. As the next generations of autonomous shuttles become more widely adopted, Flash LiDAR solutions, which are already being leveraged today, will become a central part of the sensor suite. Join us to explore the road ahead.
• About LeddarTech, Focus Markets, Automotive & Mobility Markets
• Urban Mobility Challenges, The Transformation of Urban Environments, Autonomous Shuttles as a Solution
• Mobility and ADAS Challenges, Anatomy of an Autonomous Shuttle, Today’s ADAS Systems and What Can Be Improved, New Technologies that are Ready, Flash LiDAR, Main Concerns
• Sensors - Key Component of Autonomous Shuttles, Technology Mix, Safety Cocoon with 3D Flash LiDAR showing the example of LeddarPixell
• Real World Use Cases, Collision Avoidance and VRU (Vulnerable Road User) Protection
We have invented a new method to measure distance with light. In contrast to the standard method, this hybrid method is up to 50% cheaper while delivering better performance. In addition, we are able to drastically reduce the chip area. This enables us to build the next generation of Lidar, our Lissa: 4D-True-Solid-State-Lidar.
ficonTEC is probably best known for supplying assembly systems for a good proportion of the world’s transceivers. But markets have been diversifying, and photonics-based sensors have been catching our attention for some time. LiDAR, automotive and MedTech are themes that feature strongly amongst the projects we are currently tackling and that cover the breadth of our 'Photonics from Lab to Fab' maxim well.
One of the most basic challenges for ADAS and AVs is the ability to operate in all weather and lighting conditions. Increasingly, sensing solution architects are realizing that existing sensor fusion solutions (including radar, lidar, and standard cameras) are unable to detect and recognize potential hazards under common low-visibility conditions such as night-time, fog and haze, meaning machine vision algorithms are unable to make reliable and safe driving decisions. TriEye is breaking the sensor fusion status quo with a CMOS-based Short-Wave Infrared (SWIR) HD camera. Based on advanced nanophotonics research, it enables fabrication of low-cost SWIR sensors at scale, solving the low-visibility challenge for OEMs and Tier 1s.
Silicon has transformed the consumer tech industry over the past few decades by enabling higher performance while lowering costs. Ouster has engineered the optimal lidar by using a unique combination of silicon CMOS detectors and SPADs/VCSELs, and custom designing silicon ASICs. These components are on an exponential improvement curve, much like Moore's Law, and are still far from maturity with potential for 10x improvement in just a few years. This talk will dive into the implications of introducing silicon to lidar, expected improvements, and performance roadmap.
Autonomous vehicles need to rely on their visual perception of the road course, other road users, traffic signs etc. at all times. Image sensors, and thus automotive cameras, often struggle with scenes of mixed brightness, such as entries and exits of tunnels, bridge underpasses, or low frontal sunlight. To capture all required image details in very dark and very bright areas, cameras need to provide a so-called High Dynamic Range (HDR). This presentation will introduce the state of the art in High Dynamic Range imaging based on practical experiments with the latest automotive image sensors from leading OEMs.
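To make the underlying idea concrete, the sketch below shows a minimal two-exposure HDR merge in Python. It is illustrative only: it assumes a linear sensor response and pixel values normalised to [0, 1], and is not the pipeline of any particular image sensor vendor.

```python
import numpy as np

def merge_two_exposures(short_exp, long_exp, t_short, t_long, sat=0.95):
    """Merge two linear-domain exposures of the same scene into one radiance map.

    short_exp, long_exp: arrays of pixel values normalised to [0, 1]
    t_short, t_long:     exposure times of the two frames
    sat:                 level above which the long exposure is treated as clipped
    """
    # Scale each frame to a common radiance estimate (signal per unit exposure time)
    rad_short = short_exp / t_short
    rad_long = long_exp / t_long
    # Trust the long exposure (better SNR in shadows) except where it saturates,
    # in which case fall back to the short exposure to preserve highlights.
    use_long = (long_exp < sat).astype(float)
    return use_long * rad_long + (1.0 - use_long) * rad_short
```

Real automotive HDR sensors typically combine three or more captures (or split-pixel readouts) on-chip, but the trade-off between shadow noise and highlight clipping is the same.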
In the past, LiDAR systems struggled with a number of problems: they were lacking in efficiency and robustness and were far too expensive for the automotive mass market. LiDAR sensors based on MEMS mirrors are a promising solution. However, MEMS mirrors available today have small mirror sizes of a few millimeters, so their performance in terms of range and field of view is limited. Blickfeld has developed its own MEMS mirrors with generous dimensions of more than 10 millimeters in diameter, which enables better performance. The mirror size is determined by various factors, which Blickfeld will discuss in this talk.
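One reason aperture matters, as a back-of-envelope illustration using assumed numbers rather than Blickfeld figures: the divergence of the outgoing beam is roughly diffraction-limited by the mirror diameter, so a larger mirror produces a tighter spot at long range and, in coaxial designs, also collects more of the returning light.

```latex
% Order-of-magnitude diffraction limit, assuming \lambda = 905\,\mathrm{nm}:
\[
\theta \approx \frac{\lambda}{D}:\qquad
D = 2.5\,\mathrm{mm}\ \Rightarrow\ \theta \approx 0.36\,\mathrm{mrad}\ (\approx 7\,\mathrm{cm\ of\ spot\ growth\ at\ }200\,\mathrm{m}),\qquad
D = 10\,\mathrm{mm}\ \Rightarrow\ \theta \approx 0.09\,\mathrm{mrad}\ (\approx 2\,\mathrm{cm\ at\ }200\,\mathrm{m})
\]
```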
IHS Markit will provide an overview of the different flavours of LiDAR, the supply chain, commercial aspects and the potential for each. Further, IHS Markit research will benchmark different LiDAR implementation approaches regarding cost, maturity timeline, and integration with other sensors. A review of the current LiDAR supply chain will also be given, covering the big players as well as some promising start-ups.
Digital olfaction mimics the human sense of smell by capturing odor signatures for display and analysis. By integrating digital olfaction sensors into wearable technology, we can use odor data for a host of powerful consumer health use cases—from enhanced stress indicators to hygiene monitoring. A silicon photonic solution functionalized with biosensors is sensitive enough to distinguish hundreds of odors while reducing sensor size and costs. This low cost, high volume solution is enabling innovation in consumer and digital health applications.
Henkel has offered functional inks and printed electronics solutions for many years, enabling production of traditional membrane switches and, more recently, smart sensors. Using printed circuits, this can be done in an easy and economically attractive way. Application areas vary widely, from healthcare, Industry 4.0, automotive and aerospace to consumer goods and many more. In this presentation the possibilities of printed electronics and the use of functional inks will be further illustrated by a few of the many applications already on the market today.
Integration of multiple photonic functions on Photonic Integrated Circuits (PICs) brings significant reductions in size, weight and cost compared to bulk optics and discrete implementations, enabling innovative, disruptive applications in many markets, including healthcare. The presentation will give an overview of the applications where a PIC-based sensing solution is expected to deliver most value and will zoom in on some of the most promising opportunities addressed within PhotonDelta, such as, among others, low-cost OCT devices for Point-of-care, low-cost and rapid biomarker analysis for application in Life Science research and Point-of-care Diagnostics, and Fiber-Optics-Sensing enabled applications for improved in-vivo diagnostics and treatment.
Remote home monitoring via wearables is rapidly gaining popularity, thanks to ever more complex vital signs recording and data analytics that are embedded into single devices. True system-on-chips (SoCs) push the envelope of power and form factor. While wearables are becoming more commonplace, novel health sensing paradigms are emerging. Non-contact sensing technologies enable vital signs sensing without requiring any physical contact with the human body, while ingestible sensing technology could potentially provide a holistic view of the human GI system and metabolic health. This talk will focus on the technological innovations needed in this space.
Piezo (PVDF polymer) film is a lightweight and flexible transducer material that offers very high sensitivity to dynamic strain. Since the dynamic range and operating bandwidth of the material are extremely wide, transducers based on piezo film can offer signals uniquely rich in information content. Piezo film sensors can be used to detect body presence and movement, heart and lung sounds, pulse and respiratory rate, instructed and involuntary muscle activity, sleep monitoring, and more – all without consuming power. This talk will illustrate many of these human body contact applications with example signal data.
Awaiting presentation abstract.
For decades, MEMS and sensor engineers have relied on expensive and slow desktop tools to simulate and optimize MEMS and sensor devices, and legacy desktop simulation struggles to yield highly accurate results in 3D. Today, more leading MEMS and sensor companies are turning to the power of Cloud Engineering Simulation - advanced 3D multiphysics solvers running on cloud supercomputers - to implement true "digital prototyping" R&D processes. Digital prototypes are complete digital representations of a physical device and yield all of the bench-top test data an engineer would expect from a real prototype, but delivered at a fraction of the time and cost of a physical prototype. Learn how market-leading MEMS and sensor companies are minimizing risk, cost, and time-to-market for new designs by leveraging Cloud Engineering Simulation and Digital Prototypes.
In this talk, Renovo’s Dennis Hamann will discuss current market innovations in capturing, analyzing and leveraging the data needed to bring ADAS features like self-driving, lane assist, self-parking and more to mainstream vehicles. The influx of data that OEMs have to sort, rank, analyze, and deploy is extremely large, and Dennis Hamann will explain which technologies OEMs are using to speed the innovation and safe deployment of these features.
Awaiting presentation abstract.
While hardware, and especially MEMS sensors, will remain a crucial part of CE and IoT devices, intelligence provided by smart algorithms within the sensors (that is, at the very edge of the system) will gain more and more importance. In his presentation Wolfgang Schmitt-Hahn will introduce specific use-case examples and will also give an outlook on the role of software and AI within MEMS sensors for current and new applications.
Awaiting panel session abstract.
High-end inertial sensors are widely used in harsh and demanding environments such as industrial, commercial aerospace, naval, space and defense applications, where excellent navigation and positioning are needed for critical missions. Due to the growth of all kinds of autonomous systems, and for use especially in GPS-denied environments, inertial sensors have become ever more pertinent. Increased military spending from the US, together with geopolitical tensions in the Middle East, could drive slightly higher demand for autonomous aerial, naval and land systems (UAV, UGV, LAV, MAV, ROV, etc.) from which high-end inertial sensors could benefit. In this presentation, Yole will provide an overview of the market and technologies for high-end inertial sensors as well as market and technology trends for accelerometers, gyroscopes and IMU/AHRS/INS (for example photonic FOG, IMU integration in robotic cars, etc.).
This presentation will describe the requirements and challenges in designing Industrial IoT wireless sensing nodes for predictive maintenance and data acquisition intended to operate over wide temperature ranges. A review of the components integrated in these autonomous devices will be given, with emphasis on the power elements. The requirements of the energy storage element will be discussed in terms of long life (the desire to avoid changing batteries regularly), small size (the need for smaller, non-obtrusive devices), energy density (to power ever-increasing functionality) and operating temperature (considering operation up to 125°C). Various solutions will be presented and compared depending on the use case, including cabling, conventional batteries, supercapacitors, solid-state batteries and energy harvesting.
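As a rough illustration of the long-life requirement, a duty-cycled node's lifetime can be estimated from its battery capacity and average current draw. The numbers below are generic assumptions chosen for the sake of example, not values from the talk.

```python
def node_lifetime_years(capacity_mah, i_sleep_ua, i_active_ma, t_active_s, period_s, derating=0.7):
    """Back-of-envelope lifetime of a duty-cycled wireless sensor node.

    derating loosely accounts for self-discharge, temperature effects and
    end-of-life capacity loss (assumed value, not a datasheet figure).
    """
    duty = t_active_s / period_s
    i_avg_ma = (i_sleep_ua / 1000.0) * (1.0 - duty) + i_active_ma * duty
    hours = capacity_mah * derating / i_avg_ma
    return hours / (24 * 365)

# Example: 2400 mAh cell, 5 uA sleep, 20 mA active for 1 s every 10 minutes
# -> roughly 5 years before the battery must be replaced
print(node_lifetime_years(2400, 5, 20, 1, 600))
```

Arithmetic like this is why the talk weighs conventional batteries against supercapacitors, solid-state cells and harvesting for multi-year deployments at elevated temperature.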
Fiber optics are replacing traditional sensors in many applications thanks to their noise immunity as well as their size, weight and power (SWaP) advantages over strain gauges and thermocouples. A single fiber optic sensor can deliver data from all along its length, compared to the point-specific data collected by conventional sensors. Fiber optics can measure strain, temperature, pressure, vibration and more for applications involving the most extreme conditions, such as inside engines, power plants, nuclear reactors and aircraft, or in space, conflict zones and across heavy industries. This Sensuron presentation will describe how engineers are utilizing Optical Frequency Domain Reflectometry (OFDR) fiber optic strain measurements to derive distributed shape. OFDR offers the ability to acquire strain measurements continuously along the length of an optical fiber. Coupled with fiber’s flexible routing options, this allows the capture of various components of strain continuously or quasi-continuously along the length of the substrates to which fibers are bonded. Sensuron has successfully applied its technologies in programmes with NASA, SpaceX, Airbus, Boeing, Virgin Galactic, the Ministry of Defence and Tel Aviv University, amongst many others.
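For readers unfamiliar with strain-to-shape sensing, the sketch below outlines the simplest single-axis version of the idea: bending strain measured along a bonded fiber is converted to curvature and integrated twice to estimate deflection. It is a hypothetical illustration based on small-deflection beam theory; Sensuron's production algorithms are considerably more sophisticated and are not shown here.

```python
import numpy as np

def deflection_from_bending_strain(strain, neutral_axis_offset, dx):
    """Estimate beam deflection from distributed bending strain (small-deflection theory).

    strain:              bending strain samples along the fiber (dimensionless)
    neutral_axis_offset: distance of the fiber from the beam's neutral axis (m)
    dx:                  spacing between strain samples along the beam (m)
    """
    curvature = strain / neutral_axis_offset   # kappa = strain / offset
    slope = np.cumsum(curvature) * dx          # first integration  -> slope
    deflection = np.cumsum(slope) * dx         # second integration -> deflection
    return deflection                          # assumes zero slope/deflection at the root
```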
Awaiting presentation abstract.
Dimensions are key for any product in consumer electronics, and for sensors in particular this very often translates into the smallest possible form factors. These ever-shrinking sensor sizes have moved many products towards wafer-level manufacturing. The same is true for optical sensors, and nanoimprint technology plays an essential role in this development. However, nanoimprint lithography is not only about device dimensions. The technique is a very efficient way to directly replicate pattern sizes down to the nanometer scale into polymer. This makes it possible to mass manufacture a wide variety of optical components, even with the most complex designs, at wafer level.
The road for optical sensors to enter mainstream acceptance has been, for lack of a better expression, ‘bumpy.’ Solid-state light sources and detectors are thought to behave non-deterministically, which is an engineering nightmare! Despite these challenges, we increasingly see ways in which optical sensors now play a vital role in security and other safety-critical applications such as facial recognition, LiDAR, gas sensing and structural monitoring. Broadcom (formerly Avago Technologies) has for several decades integrated solid-state sensing into the harshest of environments and extreme applications, including heavy industry, security, defense, energy generation and many others. We will discuss ways in which this expertise benefits ongoing development and integration efforts, with highlighted case studies pointing to greater performance enhancements and energy savings made possible by the creative application of optical sensing technologies.
Lightricity will present an ultra-high-efficiency photovoltaic energy harvesting technology that can convert natural and artificial light with over 30% efficiency, even in harsh environments. We will show how this unique technology can provide sufficient electrical power to a wide range of wireless IoT devices and sensors, and how it can be implemented in challenging applications where efficiency, mechanical robustness and resistance to elevated temperatures (>200°C) are paramount. The presentation will also cover specific use cases and ongoing projects for network rail monitoring, medical devices and aerospace.
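To give a feel for the orders of magnitude involved, here is a back-of-envelope estimate of harvested power under artificial light. The illuminance, cell area and luminous efficacy used below are generic assumptions for illustration, not Lightricity figures.

```python
def harvested_power_uw(lux, cell_area_cm2, efficiency, lumens_per_watt=300.0):
    """Rough harvested power estimate (in microwatts) under artificial light.

    lumens_per_watt approximates the luminous efficacy of a white-LED spectrum
    (assumed value; real spectra vary between roughly 250 and 350 lm/W).
    """
    irradiance_w_m2 = lux / lumens_per_watt   # convert illuminance to irradiance
    area_m2 = cell_area_cm2 * 1e-4
    return irradiance_w_m2 * area_m2 * efficiency * 1e6

# Example: 500 lux office lighting, 1 cm2 cell, 30% efficiency -> about 50 uW,
# enough for a well-duty-cycled wireless sensor node
print(harvested_power_uw(500, 1.0, 0.30))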