Friday, May 17, 2019

Human Factors in Aviation Essay

A large number of flight accidents occur mostly due to a lack of efficient perception of the surrounding environment. Traditional vision systems rely on direct vision of the existing environment, devoid of mist, fog and other abnormalities. Real scenarios require the ability to provide reliable vision overcoming natural hindrances. Humans learnt the art of flying when they abandoned the idea of flapping wings. Similarly, the latest developments in enhanced vision systems have moved beyond traditional vision systems to ensure flying safety.

In recent years, Controlled Flight Into Terrain (CFIT) has posed a significant risk in both civilian and military aviation. One of aviation's worst accidents occurred in Tenerife, when two Boeing 747s collided as one aircraft was attempting to take off while the other was still on the runway. The risk of CFIT can be greatly reduced with the aid of a suite of radar and collision avoidance equipment commonly termed Enhanced Vision Systems (EVS).

Rationale

One of the primary causes of many runway accidents is reduced visibility. One solution to this limitation lies in the use of infrared sensing in aviation operations. All objects on the ground emit infrared radiation, and their emissions can be detected through total darkness as well as through intervening mist, rain, haze, smoke and other scenarios in which the objects are invisible to the human eye (Kerr, 2004). The first EVS was targeted for production in 2001 as standard equipment on the Gulfstream GV-SP aircraft.
The system was developed in part by Kollsman Inc. under a technology license from Advanced Technologies, Inc. Utilization of EVS addressed critical areas such as CFIT avoidance, general safety enhancements during approach, landing and take off, improved detection of trees, power lines and other obstructions, improved visibility in brown-out conditions, improved visibility in haze and rain, identification of broken and sloping terrain, and detection of runway incursions.

Enhanced Vision Systems

An enhanced vision system is an electronic means of providing a display of the forward external scene topology through the use of infrared imaging sensors. Such systems are a combination of near term and long term designs. Near term designs present sensor imagery with superimposed flight symbology on a Head Up Display (HUD) and may include such enhancements as runway outlines and other display augmentations such as obstacles, taxiways and flight corridors. Long term designs include complete replacement of the out-the-window scene with a combination of electro-optical and sensor information.

Infrared Sensors

EVS uses infrared (IR) sensors that detect and measure the levels of infrared radiation emitted continuously by all objects. An object's radiation level is a function of its temperature, with warmer objects emitting more radiation. The infrared sensor measures these emission levels, which are then processed to produce a thermal image of the sensor's forward field of view. EVS IR sensors operate in the infrared spectrum (Kerr, 2004). The relevant bands of the spectrum are long wave IR, medium wave IR and short wave IR. Two variants of this technology are currently in aircraft use. A single sensor unit operating in the long wave, maximum weather penetration band has significant fog-penetrating capability. Short wave sensors have the ability to enhance the acquisition of runway lighting.
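The temperature dependence described above can be made concrete with the Stefan-Boltzmann law. The sketch below is illustrative only: the function name and the example temperatures are my own choices, not values from the cited sources.

```python
# Stefan-Boltzmann law: total power radiated per unit area grows as T^4,
# which is why warmer objects appear brighter to an infrared sensor.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiant_exitance(temp_kelvin: float) -> float:
    """Power radiated per unit area (W/m^2) by an ideal blackbody."""
    return SIGMA * temp_kelvin ** 4

# Illustrative temperatures (assumptions, not measured values):
runway = radiant_exitance(300.0)   # warm tarmac, about 27 C
terrain = radiant_exitance(273.0)  # cooler surroundings, about 0 C
# The difference in emitted power is the contrast that an IR sensor
# converts into a thermal image, even in darkness or haze.
```

The steep fourth-power dependence is what lets a sensor separate a warm runway surface from cooler terrain when neither is visible to the eye.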
A dual sensor variant composed of short and long wave bands, used for both light and weather penetration, fuses both sensor images into a full spectrum view. Image sensors operating in the long wave infrared spectrum are cryo-cooled.

Models of EVS

One of the commonly used EVS systems is the EVS 2000. The operation of the EVS 2000 dual band sensor is given in Figure 1. The long wave infrared sensor provides the best weather penetration, ambient background and terrain features. Similarly, the short wave sensor provides the best detection of lighting, runway outline and obstacle lights. The signal processor combines the images of both sensors to display a fused image picturing the current environment (Kerr, Luk, Hammerstrom, and Misha, 2003). (Source: Kerr et al, 2003)

Boeing Enhanced Vision System

Boeing's EVS enhances situational awareness by providing electronic, real time vision to the pilots. It provides information during low level, night time, and moderate to heavy weather operations in all phases of flight. It has a series of imaging sensors, a navigational terrain database with a virtual pathway for approach during landings, an EVS image processor, and a wide field of view, see-through helmet mounted display integrated with a head tracker. It also includes a synthetic vision system accompanying the EVS to present a computer generated image of the out-the-window view in areas not covered by the imaging sensors of the EVS. The EVS image processor performs the following three functions. It compares the image scanned by the ground mapping radar and the MMW sensor with a database to present a computer generated image of the ground terrain conditions. It is accompanied by a Global Positioning System (GPS) to provide a position map during all phases of flight. The IR imaging sensors provide a thermal image of the forward field of view of the aircraft.
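As a rough sketch of what a dual-band signal processor does, the fragment below fuses two co-registered sensor frames by weighted averaging. This is a minimal stand-in under stated assumptions: the function name, the [0, 1] normalization, and the simple linear blend are mine, and the real EVS 2000 fusion is considerably more sophisticated.

```python
import numpy as np

def fuse_dual_band(lwir: np.ndarray, swir: np.ndarray,
                   w_lwir: float = 0.5) -> np.ndarray:
    """Blend co-registered LWIR and SWIR frames into one display image.

    Assumes both frames are already registered to the same field of
    view and normalized to [0, 1]; w_lwir weights the long wave band.
    """
    if lwir.shape != swir.shape:
        raise ValueError("frames must be co-registered to equal shapes")
    fused = w_lwir * lwir + (1.0 - w_lwir) * swir
    return np.clip(fused, 0.0, 1.0)
```

With equal weights, a feature bright in either band remains visible in the fused frame, which is the point of combining weather penetration (LWIR) with light detection (SWIR).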
Typical HUD symbology including altitude, air speed, pressure, etc. is added without any obscuration of the underlying scene. The SV imagery provides a three dimensional view of a clear out-the-window scene with reference to the stored on-board database. Figure 2 gives Boeing's EVS/SV integrated system. The projection of SV data should be confirmed by the EVS data so that the images register accurately. The system provides three basic views, i.e., the flight view or normal view, the map views at different altitudes or ranges, and the orbiting view, an exocentric/ownship view from any orbiting location around the vehicle (Jennings, Alter, Barrow, Bernier and Guell, 2003). (Source: Jennings et al, 2003)

EVS Image Processing and Integration

Association Engine Approach

This is a neural-net-inspired, self organizing associative memory approach that can be implemented on FPGA-based boards of moderate cost. It constitutes a very efficient implementation of best match association at high real time video rates, and it is highly robust in the face of noisy and obscured image inputs. This means of image representation emulates the human visual pathway. A preprocessor performs feature extraction of edges, as well as potentially higher levels of abstraction, in order to generate a large, sparse, random binary vector for each image frame. The features are created by searching for zero crossings after filtering with a Laplacian of Gaussian filter, thereby finding edges. Each edge image is then thresholded by taking the K strongest features, setting those to 1 and all others to 0. For multiple images, the feature vectors are concatenated to create a composite vector. The operations are performed over a range of multi-resolution hyper pixels, including those for 3-D images.
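The feature extraction step just described (Laplacian of Gaussian filtering, zero crossings, K strongest features) can be sketched as follows. This is a simplified NumPy/SciPy reading of the description, not the cited FPGA implementation; the zero-crossing test and the function name are my own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def sparse_binary_features(frame: np.ndarray, k: int,
                           sigma: float = 2.0) -> np.ndarray:
    """Edge image -> large, sparse binary feature vector.

    Filter with a Laplacian of Gaussian, find zero crossings (edges),
    then set the K strongest responses to 1 and all others to 0.
    """
    log = gaussian_laplace(frame.astype(float), sigma=sigma)
    # Crude zero-crossing mask: sign changes versus right/down neighbours.
    zc = (np.sign(log[:-1, :-1]) != np.sign(log[1:, :-1])) | \
         (np.sign(log[:-1, :-1]) != np.sign(log[:-1, 1:]))
    strength = np.abs(log[:-1, :-1]) * zc   # response magnitude at edges
    vec = np.zeros(strength.size, dtype=np.uint8)
    if k > 0:
        vec[np.argsort(strength.ravel())[-k:]] = 1  # K strongest -> 1
    return vec
```

Per the text, vectors from multiple frames would then be concatenated into a composite vector, e.g. `np.concatenate([vec_a, vec_b])`.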
FPGA provides a complete solution by offering the necessary memory bandwidth, significant parallelism and tolerance of low precision. Figure 3 provides an illustration of an association engine operation (Kerr et al, 2003). Fig 3: Association Engine Operation (Source: Kerr et al, 2003)

DSP Approach

One approach to performing multi sensor image enhancement and fusion is the Retinex algorithm developed at the NASA Langley Research Center. Digital signal processors from Texas Instruments have been used to successfully implement a real-time version of Retinex. The C6711, C6713 and DM642 are some of the commercial digital signal processors (DSPs) used for image processing. Image processing, a subset of digital signal processing, enables fusion of images from various sensors to aid in efficient navigation. Figure 4: EVS Image Processing (Source: Hines et al, 2005). In the EVS image processing architecture, Long Wave Infrared (LWIR) and Short Wave Infrared (SWIR) processing can be done simultaneously. The multi spectral data streams are registered to remove field of view and spatial resolution differences between the cameras and to correct inaccuracies. Registration of the LWIR data to the SWIR data is performed by selecting SWIR as the baseline and applying an affine transform to the LWIR imagery. The LaRC patented Retinex algorithm is used to enhance the information content of the acquired imagery, particularly during poor visibility conditions. The Retinex can also be used as a fusion engine, since the algorithm performs nearly symmetrical processing on multi-spectral data and applies multiple scaling operations to each spectral band. The fused video stream contains more information than the individual spectral bands and provides the pilot a single output which can be interpreted easily. Figure 4 illustrates the various processing stages in fusing a multi spectral image (Hines et al, 2005).
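A single-scale Retinex, the simplest member of the family the text refers to, can be sketched in a few lines. This is only an illustrative stand-in for the multi-scale LaRC Retinex and its DSP implementation; the sigma value and the final display stretch are my assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """log(image) minus log of its Gaussian surround, stretched for display.

    Removes slowly varying illumination so local detail survives haze
    and poor lighting; a multi-scale Retinex repeats this at several
    sigmas and combines the results.
    """
    img = image.astype(float) + 1.0                # avoid log(0)
    surround = gaussian_filter(img, sigma=sigma)   # local illumination estimate
    retinex = np.log(img) - np.log(surround)
    lo, hi = retinex.min(), retinex.max()          # stretch back to [0, 1]
    return (retinex - lo) / (hi - lo) if hi > lo else np.zeros_like(retinex)
```

Used as a fusion engine, the same processing would be applied to each spectral band before combining them, as the text describes.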
Design Tradeoffs

An LWIR based single image system is no panacea for fog, but it reduces hardware requirements. It is also a low cost solution with lower resolution. An image fusion system provides better penetration of fog and better resolution, but comes at a higher cost. Increasing the bandwidth provides better size and angular resolution and satisfactory atmospheric transmission, but at high cost. Basic diffraction physics limits the true angular resolution, but this can be overcome by providing sufficient over-sampling. Sensitivity vs. update rate and physical size vs. resolution have traditionally been issues with passive cameras. Fortunately, dual mode sensors overcome these trade offs (Kerr et al, 2003). A successful image capture of a landing scenario is given in Figure 5. Figure 5: EVS view vs. pilot's view (Source: Yerex, 2006)

Human Factors

Controlling the aircraft during the entire period of flight is the sole responsibility of the pilot. The pilot seeks guidance from the co-pilot, the control tower and the inbuilt EVS to successfully steer the aircraft. The pilot controls the aircraft based on a representation of the world displayed in the cockpit by the inbuilt systems, and may not see the actual out-the-window visual scene. Visual information is presented that may not otherwise be visible, and some information may be lost due to limitations of resolution, field of view or spectral sensitivities. Therefore, with EVS, the world is not viewed directly but as a representation built from sensors and computerized databases. More importantly, the essential data for pilotage should be available on the display. Though EVS gives a representation of the exact view of the flight environment, its accuracy plays a significant role in flight safety. Thus human factors are vital for flight control.
