Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has ever tried to contact us
(Calvin & Hobbes, "Weirdos From Another Planet")
Machine learning and artificial intelligence form an absolutely fascinating field. The algorithms applied in this field are extremely sophisticated: they usually compute numbers that are hard for a human brain to interpret, and from these numbers they yield the correct result, provided they are implemented correctly. That makes it very important to fully understand the mathematics behind these algorithms before implementing them. That's usually quite a challenge.
- Linear regression, iterative: to see how gradient descent works
- Gradient descent: approximation of a function with more than one input variable
- Logistic regression with the sigmoid function: logistic regression using mean square deviation as the cost function
- Newton optimisation: finding a minimum or maximum of a function with more than one input variable
- Logistic regression with maximum log likelihood: logistic regression using maximum log likelihood as the cost function
- Mini-batch gradient descent: mini-batch gradient descent and momentum to process big data sets
- Backpropagation with mean square deviation: backpropagation using mean square deviation to train a neural net on the Iris data set
- Backpropagation with maximum log likelihood: backpropagation using maximum log likelihood to train a neural net on the Iris data set
- Backpropagation with softmax activation: backpropagation using the softmax function and cross entropy to train a neural net on the Iris data set
- Classifying data with a support vector machine: sequential minimal optimization to classify the Iris data set
- Classifying data with a decision tree: classifying the Iris data set with a decision tree
- Pattern reconstruction with the Boltzmann machine: using a Boltzmann machine to read numbers in a picture
- Principal component analysis: data reduction for a neural net
- Data reduction with an autoencoder: another kind of data reduction for a neural net
- Clustering: clustering with K-means and a Gaussian mixture model
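To give a taste of the first topic in the list, here is a minimal sketch of iterative linear regression via gradient descent. It fits a line y = w·x + b by repeatedly stepping against the gradient of the mean square deviation. All data, names, and parameters here are illustrative, not taken from the articles above.

```python
def gradient_descent(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on the mean square deviation."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the cost (1/n) * sum((w*x + b - y)^2)
        grad_w = (2.0 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2.0 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        # Step downhill along both gradients
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Example: points lying on the line y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = gradient_descent(xs, ys)
```

With a small enough learning rate the estimates converge toward w = 2 and b = 1; the articles listed above cover the mathematics behind why and how fast that happens.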