Say it first

Notes that use the brain analogy for intuition will be marked with [brain]

Subpages in order

(read these later)

Backpropagation: how to compute the gradient

Activation Functions, Weight Initialization & Normalization

Convolutional Neural Networks (CNN)

Recurrent Neural Networks (RNN)

Reference

CS231n Convolutional Neural Networks for Visual Recognition

Artificial Neural Network

(universal function approximator)

Epoch: one full pass through the whole training dataset
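A tiny sketch of what that means in a training loop (the data and names here are hypothetical): each epoch is one complete pass in which every training sample is visited once, typically in a freshly shuffled order.

import numpy as np

data = np.arange(10)                            # hypothetical training set of 10 samples
n_epochs = 3
for epoch in range(n_epochs):                   # 3 epochs = 3 full passes over the data
    for sample in np.random.permutation(data):  # reshuffle, then visit every sample once
        pass                                    # one training update per sample would go here
    print("epoch", epoch + 1, "complete")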

Perceptron

linear model

$$ O=f(\theta+ \sum_i w_ix_i),\quad f(x)=[x>0] \\ \Delta w_i=\eta(T-O)x_i,\quad \Delta \theta=\eta(T-O) $$

where $x_i$ are the inputs, $w_i$ the weights, $\theta$ the bias, $O$ the predicted output, $T$ the target label, and $\eta$ the learning rate (a from-scratch sketch of this rule follows the sklearn example below).

import numpy as np
from sklearn.linear_model import Perceptron

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
outputs = np.array([0, 0, 0, 1])  # labels (logical AND)
model = Perceptron(eta0=0.2)      # eta0 is the learning rate
# coef_init / intercept_init set the initial weights (0.1, 0.5) and bias;
# training stops after max_iter epochs or when the loss stops improving by more than tol
model.fit(inputs, outputs, coef_init=np.array([[0.1, 0.5]]), intercept_init=np.array([0.0]))
predicted_outputs = model.predict([[0, 0], [1, 0], [1, 1], [0, 1]])
print(predicted_outputs)
print(model.coef_)       # final weights
print(model.intercept_)  # bias
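The sklearn call above hides the actual update rule. Below is a minimal from-scratch sketch of the rule given earlier (step activation $f(x)=[x>0]$, learning rate $\eta=0.2$), trained on the same AND data; the variable names and the epoch count are my own choices, not part of any library API.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1])            # target labels (logical AND)

w = np.array([0.1, 0.5])              # initial weights
theta = 0.0                           # bias
eta = 0.2                             # learning rate

for epoch in range(10):               # 10 epochs is enough for AND with these settings
    for x, t in zip(X, T):
        o = float(theta + w @ x > 0)  # O = f(theta + sum_i w_i x_i), f(x) = [x > 0]
        w += eta * (t - o) * x        # Delta w_i = eta (T - O) x_i
        theta += eta * (t - o)        # Delta theta = eta (T - O)

print(w, theta)                               # learned weights and bias
print([float(theta + w @ x > 0) for x in X])  # reproduces the labels [0, 0, 0, 1]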

Bias