ARTIFICIAL NEURAL NETWORKS AND CASE STUDY

Deeksha Gautam
Mar 17, 2021

What Are Neural Networks?

Neural networks are a series of algorithms that mimic the operations of a human brain to recognize relationships between vast amounts of data. They are used in a variety of applications in financial services, from forecasting and marketing research to fraud detection and risk assessment.

Information flows through a neural network in two ways. Whether the network is learning (being trained) or operating normally (after being trained), patterns of information are fed into it via the input units; these trigger the layers of hidden units, and the results in turn arrive at the output units. Beyond finance, neural networks have potential applications in many industrial areas such as advanced robotics, operations research, and process engineering.

How do Artificial Neural Networks Work?

Artificial neural networks are made up of a number of different layers, and each layer houses artificial neurons called units. These units allow the layers to process, categorize, and sort information. Alongside the layers are processing nodes, and each node holds its own specific piece of knowledge: the rules the system was originally programmed with, plus any rules it has learned for itself. This makeup allows the network to learn from and react to both structured and unstructured data sets. Almost all artificial neural networks are fully connected across these layers, and each connection is weighted: the larger the weight, the greater the influence one unit has on another.

The first layer is the input layer. It takes in information in various forms, which then progresses through the hidden layers, where it is analysed and processed. In this way the network learns more and more about the information, until the data finally reaches the end of the network: the output layer.
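
To make this concrete, here is a minimal sketch of a forward pass through such a network, written in Python with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not details of any particular system:

```python
import numpy as np

# A toy fully connected network: 3 input units, 4 hidden units, 2 output units.
# Sizes and weights are made up purely for illustration.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))    # weighted connections: input -> hidden
W_output = rng.normal(size=(4, 2))    # weighted connections: hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W_hidden)        # hidden units triggered by the inputs
    output = sigmoid(hidden @ W_output)   # output units receive the weighted hidden activations
    return output

print(forward(np.array([0.5, -1.2, 3.0])))
```

Each `@` is simply the weighted sum described above: every unit multiplies its incoming values by the connection weights, and the larger a weight, the more that connection influences the next unit.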

Educating Artificial Neural Networks

For artificial neural networks to learn they require a mass of information. This information is known as a training set.

For example, if you want to teach your ANN to recognise a cat, your training set would consist of thousands of images of cats, all tagged “cat”. Once this information has been input and analysed, the network is considered trained, and it will try to classify any future data based on what it thinks it is seeing. So if you present it with a new image of a cat, it will identify the creature.
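
For a concrete (if simplified) picture of how such a tagged training set might be used in code, the sketch below reads images from a hypothetical data/cat and data/not_cat folder with TensorFlow/Keras and fits a small classifier. The folder layout, image size, architecture, and epoch count are all illustrative assumptions:

```python
import tensorflow as tf

# Hypothetical folder of tagged images: data/cat/... and data/not_cat/...
# (the path and folder names are assumptions for illustration).
train_set = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)

# A deliberately small convolutional classifier; the architecture is illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # one probability separating the two tags
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(train_set, epochs=5)   # the network is "trained" on the tagged images
# model.predict(...) can then be used to classify a new, unseen image.
```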

As a check during the training period, the system’s output is matched against the description of the data. If the two agree, the learning process is validated. If they differ, backpropagation is used to adjust the learning process.

Backpropagation involves working back through the layers, adjusting the mathematical equations and parameters that were set. The adjustments are made until the output presents the desired result. This process of repeated adjustment, which is at the heart of deep learning, is what makes the network adaptive: it is able to learn and adapt as more information is processed.
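
The sketch below shows that loop end to end on a toy problem: a tiny two-layer network repeatedly compares its output with the tagged targets and works the error back through the layers, nudging its weights until the output approaches the desired result. The XOR task, layer sizes, learning rate, and step count are illustrative assumptions:

```python
import numpy as np

# A tiny 2-4-1 network learning XOR by backpropagation (purely illustrative).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # the tagged, desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare the network's output with the desired result.
    error = out - y

    # Backward pass: work back through the layers, measuring how each
    # weight contributed to the error.
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weights and biases a small step to reduce the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # outputs should move towards the tagged targets [0, 1, 1, 0]
```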

Neural Networks In Domain Industries

Artificial neural networks have become an accepted information analysis technology in a variety of disciplines. This has resulted in a variety of commercial applications (in both products and services) of neural network technology.

Below are domains of commercial applications of neural network technology.

- Business, Marketing & Real Estate

- Document & Form Processing

- Finance Industry

- Market trading

- Fraud detection

- Food Industry

- Energy Industry

- Manufacturing & Process Control

- Medical & Health Care Industry

- Science & Engineering

- Transportation & Communication

Neural Network Use Cases:

In The Field of Aerospace and Satellites

· Deep Learning Meets Space

Spacecraft (vehicles designed for operation outside the Earth’s atmosphere) and satellites (objects that orbit a natural body) have two types of systems:

- payload

- operations systems

· Analysis of Payload Data

Weather & atmospheric monitoring — deep learning is used for cloud detection and for estimating precipitation, and Faster R-CNNs have achieved remarkable results in estimating tropical storm intensity.

Vegetation and ground cover classification — hyperspectral data (HSD) is used to identify land cover; data from the MODIS and Landsat satellites has been used successfully to show diminishing wetlands.

Object detection and tracking — when pointed towards Earth, HSD has been used to detect humans during natural disasters, track endangered animals, military troops, and ships, and monitor oil spills. When pointed at the sky, such instruments leverage the lack of atmospheric interference to detect galactic phenomena. The James Webb Space Telescope project is one of the first to use DL in data post-processing to detect galaxy clusters.

· Radiation Hardening in Space

In space, devices are no longer protected from the Sun’s radiation by the Earth’s atmosphere, which can cause spurious errors or stuck transistors in a device’s circuitry. Radiation damages hardware either through its cumulative effects or through single event effects (SEEs). Recoverable SEEs are called single event upsets (SEUs) and can affect the logic state of memory. Radiation hardening allows a compute component to withstand such errors, but rad-hard components are roughly twice as slow and many times as expensive as their regular counterparts.
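
As a purely illustrative aside (not something described above), one common software-level complement to rad-hard parts is to keep redundant copies of critical state and take a majority vote before using it, so that a single event upset in one copy is outvoted by the others:

```python
from collections import Counter

def majority_vote(copies):
    """Return the most common value among redundant copies of a stored word.

    A sketch of software-level masking of single event upsets: several copies
    of critical state are kept and voted on before use. This is an illustrative
    technique that complements, rather than replaces, radiation-hardened hardware.
    """
    value, _ = Counter(copies).most_common(1)[0]
    return value

# Three redundant copies of a flag; one bit has been flipped by an SEU.
stored = [0b1010, 0b1010, 0b1011]
print(bin(majority_vote(stored)))   # 0b1010: the corrupted copy is outvoted
```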

· Space Hardware and Software

Computational resources in space have traditionally been highly specialised, tightly integrated monoliths. In contrast with terrestrial hardware, the harsh and remote environment of space requires compute systems (including the processor and memory chips) to be simultaneously efficient, radiation-resistant, and fault-tolerant. In addition, systems sent into space have to be thoroughly verified. As a result, space systems, especially hardware, lag considerably behind modern computing.

· Efficient Satellite Imaging

Modern sensors can capture very high resolution images, down to 31 cm of ground per pixel (in panchromatic mode). Even higher resolution can be obtained using synthetic-aperture radar, which uses the motion of a radar antenna over the surface to map it in three dimensions at a resolution of just a few centimetres per pixel.

Captured data needs to be transmitted to the ground station for aggregation and analysis, which can be expensive. A satellite can reduce the amount of data transmitted by employing deep learning: on-board pre-processing can discard parts of the image of no interest.

Global annual cloud coverage is estimated at 66%, so excluding cloudy images would drastically reduce the amount of data transmitted. For satellites deployed for a particular purpose, such as boat or whale detection, neural networks can also be used to facilitate the satellite’s primary task and transmit only the regions of interest. Transmission costs can be further reduced by employing a neural network to compress the image data. While existing models already offer spectacular gains, training them specifically on satellite data would yield considerably better results.
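
A rough sketch of the on-board filtering idea: score each captured tile with an on-board model and queue only the tiles worth sending. The cloud_fraction stub, tile sizes, and threshold below are made-up placeholders for whatever trained classifier and cutoff a real mission would use:

```python
import numpy as np

def cloud_fraction(tile: np.ndarray) -> float:
    """Stand-in for an on-board neural network that scores cloud cover.

    A real system would run a trained model here; this stub simply treats
    very bright pixels as cloud, purely for illustration.
    """
    return float((tile > 0.8).mean())

def select_for_downlink(tiles, max_cloud=0.3):
    """Keep only tiles whose estimated cloud cover is below a made-up threshold."""
    return [t for t in tiles if cloud_fraction(t) < max_cloud]

# Simulate a strip of captured tiles (pixel values in [0, 1]); every other tile is "cloudy".
rng = np.random.default_rng(0)
captured = [np.clip(rng.random((64, 64)) + (0.5 if i % 2 else 0.0), 0, 1)
            for i in range(10)]
to_send = select_for_downlink(captured)
print(f"transmitting {len(to_send)} of {len(captured)} tiles")
```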

· Improved Compute Paradigms for Space

The characterisation of DL models on a spacecraft must encompass more than just accuracy. A model’s ability to perform depends not only on its construction but also on an environment constrained in terms of memory, power, compute, and reliability. The definition of efficiency must therefore be expanded to include hardware- and context-aware characterisation. Even so, the adaptation process would still need to accommodate formidable hurdles endemic to space hardware, such as higher error rates and the increased memory latency of rad-hard components. Not only must the efficiency of DL models on a compute unit be measured along multiple axes, it must also be characterised in the context of the overall spacecraft’s operation. This becomes increasingly important as compute components in newer spacecraft are shared between various subsystems.

Real-time systems, e.g. navigation, may be sensitive to interrupts and I/O bottlenecks. As powerful hardware becomes common in space, it may become possible to leverage more than a single satellite for computation. Such networks would offer not only more computational power but also greater fault tolerance.

Future Developments in Neural Network Technologies:

Mind-melding between human and artificial brains: artificial intelligence, artificial neural networks, and deep learning will eventually play a far more active role in retraining our brains, particularly as brain-computer interfaces (BCIs) become more prevalent and widely used. Deep learning will be essential for learning to read and interpret an individual brain’s language, and it will be used to optimize different aspects of thought, such as focus, analysis, and introspection. Eventually, this may be the path to intelligence augmentation (IA), a form of blended intelligence we’ll see around the middle of this century.

Thank You for reading!!
