Now we will address the real problem: how can we optimise the hunt so that we catch the maximum number of quasars from their colour-colour diagrams without knowing their redshifts? The truth is that it requires a lot of expertise. Expertise is something we gain through experience, and the more exposure we have, the more accurate our judgement becomes. Can we obtain expertise in a short time? Given the slow speed of our processing system (the brain) and its subjective (non-quantitative) reasoning, a human takes a long time to learn and gain expertise.
What about machines? If we can teach them to learn, they will work consistently for days or even years at a stretch. Moreover, a human expert can always judge and correct a machine without hurting its ego. The technique of making machines learn is called machine learning, and the expertise that machines derive from such learning is called artificial intelligence.
Among the many different machine learning tools, one popular family is called neural networks. Neural networks are named after the neurons that give the human brain its intelligence. They are made up of interconnected learning elements, just like the neurons in our brain. The connections between these learning elements carry weights, analogous to the synaptic strengths between biological neurons. While the human brain learns by adjusting the synaptic strengths between its neurons, an artificial neural network (ANN) learns by adjusting the weights of its connections.
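A single learning element can be sketched in a few lines of code: it forms a weighted sum of its inputs and squashes the result into a number between 0 and 1. The inputs and weights below are made-up values, chosen only for illustration.

```python
# A minimal sketch of a single learning element (an artificial neuron).
# It computes a weighted sum of its inputs plus a bias, then passes the
# result through a sigmoid "activation" that squashes it into (0, 1).
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical example: two colour indices as inputs, made-up weights
output = neuron([0.3, -0.1], [0.8, 0.5], bias=0.1)
print(round(output, 3))
```

Learning amounts to nudging the `weights` and `bias` until the output agrees with what we want, just as the brain adjusts synaptic strengths.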
There are many different neural networks. Some can learn on their own and are called self-learning networks or self-organising maps (SOM), while others require a teacher to train them and are called supervised neural networks. Each has its own advantages and disadvantages; it is like the difference between self-study and formal learning. When the goals are defined a priori, a supervised learning procedure is considered the better option, because only supervised networks allow one to define what is to be searched for - the target. In our example, the target is the quasar. For the same reason, our optimisation process needs a supervised network.
Supervised neural networks also come in different types. The most popular of them is the backpropagation network. It starts from a random initial state and keeps adjusting the weights until the outputs predicted by the network correspond to the outputs specified by the targets. Although it is a very powerful technique, the random initial state means the network might take an unpredictable span of time to converge (learn). The accuracy of each learning cycle may also differ, depending on the initial state of the system. Since there is no formal method to set the initial state, the only option is to run as many training cycles as possible and then pick the best among them as the trained network.
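The core loop of such supervised training can be sketched as follows: start from random weights, compare the network's output against the target for each example, and nudge the weights to reduce the difference. The two-colour "quasar versus star" data below are entirely hypothetical, and a single neuron stands in for a full network; the point is only to show the random initial state and the repeated weight adjustment.

```python
# A minimal sketch of supervised training by gradient descent (the core
# idea behind backpropagation) on made-up two-colour data. Because the
# initial weights are random, different runs can take different numbers
# of steps to converge.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training set: (colour1, colour2) -> 1 for quasar, 0 for star
data = [((0.2, 0.9), 1), ((0.1, 0.8), 1), ((0.9, 0.2), 0), ((0.8, 0.1), 0)]

random.seed(42)                              # fix the random initial state
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
rate = 1.0                                   # learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = target - out                   # target minus network output
        # nudge each weight in proportion to its input and the error
        grad = err * out * (1 - out)
        w[0] += rate * grad * x1
        w[1] += rate * grad * x2
        b += rate * grad

# After training, the rounded outputs match the targets
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)
```

Changing the seed changes the starting point and hence the path to convergence, which is exactly why, in practice, one runs many training cycles and keeps the best.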
This problem with backpropagation was recently overcome by a modified family of networks that use Bayesian probability estimates to set the initial state of the network. One such network that is useful for candidate selection (a classification problem) is called the Difference Boosting Neural Network (DBNN). The DBNN computes the posterior probability for an object to belong to a particular group and then updates this belief by increasing the connection weights until the classification by the network matches that of the target set. It is called difference boosting because the weights are updated so that the differences between seemingly similar objects belonging to different classes are enhanced.
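The idea can be illustrated with a much-simplified sketch: estimate Bayesian likelihoods from frequency counts, then boost the weights on the true class's terms whenever a training example is misclassified, until the classifications match the targets. This is only the principle; the published DBNN algorithm differs in detail, and the binned "colour" data below are hypothetical.

```python
# A much-simplified sketch of the difference-boosting idea: posteriors
# from frequency counts, plus boostable weights that are increased for
# the true class whenever a training example is misclassified. Not the
# published DBNN algorithm; data are hypothetical binned colour indices.
from collections import defaultdict

# (feature bins) -> class: 'q' = quasar, 's' = star
train = [((1, 1), 'q'), ((1, 0), 's'), ((0, 1), 's'), ((0, 0), 's')]
classes = ['q', 's']

class_n = defaultdict(int)
counts = defaultdict(float)
for x, c in train:
    class_n[c] += 1
    for i, v in enumerate(x):
        counts[(c, i, v)] += 1

def likelihood(c, i, v):
    # Laplace-smoothed estimate of P(feature_i = v | class c)
    return (counts[(c, i, v)] + 1) / (class_n[c] + 2)

weights = defaultdict(lambda: 1.0)  # boostable weight per (class, feature, value)

def predict(x):
    score = {}
    for c in classes:
        p = class_n[c] / len(train)          # prior P(class)
        for i, v in enumerate(x):
            p *= likelihood(c, i, v) * weights[(c, i, v)]
        score[c] = p
    return max(score, key=score.get)

# Boost until the network's classifications match the target set
for epoch in range(20):
    mistakes = 0
    for x, c in train:
        if predict(x) != c:
            mistakes += 1
            for i, v in enumerate(x):
                weights[(c, i, v)] *= 1.5    # enhance the difference
    if mistakes == 0:
        break

print([predict(x) for x, _ in train])
```

Here the lone quasar is initially outvoted by the stars' larger prior; boosting the weights on its feature values enhances exactly the terms that distinguish it, after which all training objects are classified correctly.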