First, a quick look at how to use `import numpy` under Python 3:
```
import numpy as np

A = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])
b = np.array([2, 2, 1])
print(np.dot(A.T, b))   # matrix-vector product with the transpose of A

C = np.array([[1, 0, 0], [0, 1, 0], [2, 5, 9]])
print(np.dot(C.T, b))

A = np.array([[1, 0, 0], [0, 1, 0], [20, 50, 90]])
print(np.dot(A.T, b))
```
The code above prints:
```
[3 3 0]
[4 7 9]
[22 52 90]
```
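To see why the first result is `[3 3 0]`: `A.T` swaps rows and columns, and `np.dot` of a 2-D array with a 1-D array is a matrix-vector product. A small sketch of this (the `@` operator, available since Python 3.5, is an equivalent spelling):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])
b = np.array([2, 2, 1])

print(A.T)        # rows of A become columns
print(A.T @ b)    # same as np.dot(A.T, b): [3 3 0]

# The two spellings agree element by element
assert (A.T @ b == np.dot(A.T, b)).all()
```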
# 從零實現一個神經網絡
```
import numpy as np

def sigmoid(x):
    # Our activation function: f(x) = 1 / (1 + e^(-x))
    return 1 / (1 + np.exp(-x))

class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def feedforward(self, inputs):
        # Weight the inputs, add the bias, then apply the activation function
        total = np.dot(self.weights, inputs) + self.bias
        return sigmoid(total)

weights = np.array([0, 1])  # w1 = 0, w2 = 1
bias = 4                    # b = 4
n = Neuron(weights, bias)

x = np.array([2, 3])        # x1 = 2, x2 = 3
print(n.feedforward(x))     # 0.9990889488055994
```
- BP neural networks through their C++ implementation -- a machine-learning "crash course"
- Training a BP (neural) network to learn "multiplication" -- training anti-aircraft guns with "mosquitoes"
- ANN computing XOR & feedforward neural networks 20200302
- Representing neural networks (ANN) 20200312
- The backpropagation (BP) algorithm for a simple neural network
- Newton's method for finding a local optimum (solution) 20200310
- Installing numpy, pip3, etc. on Ubuntu
- Implementing a neural network from scratch - numpy part 01
- An improved translation of Victor Zhou's (Princeton, USA) classic neural-network article 20200311
- A C-language implementation of Victor Zhou's (Princeton) neural network 210301
- A C implementation of XOR with a BP network 202102
- A BP network for XOR - inputs entered automatically (hard-coded) 20210202
- Running MNIST with TensorFlow 2.0 on Python 3.6, one pitfall per step 20210210
- Handwritten digit recognition with numpy - direct BP-network recognition 210201