# Creating a Sequence-to-Sequence Model
Since each RNN unit we use also has an output, we can train RNNs on sequences to predict other sequences of variable length. For this recipe, we will take advantage of this fact to create an English-to-German translation model.
## Getting ready
For this recipe, we will try to build a language translation model that translates from English to German.
TensorFlow has built-in model classes for sequence-to-sequence training. We will illustrate how to train and use such a model on downloaded English-German sentences. The data we will use is a compiled zip file from [http://www.manythings.org/](http://www.manythings.org/), which aggregates data from the Tatoeba Project ([http://tatoeba.org/home](http://tatoeba.org/home)). The data consists of tab-delimited English-German sentence translations; for example, one line might contain the sentence pair `hello. \t hallo`. The data contains thousands of sentences of varying lengths.
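If you want a feel for the raw format before running the recipe, each line can be split on the tab character into its English and German halves. A minimal sketch (the sample line is only illustrative):

```py
# Minimal sketch: parsing one tab-delimited line (sample line is illustrative)
line = "hello.\thallo"
english, german = line.split('\t')[:2]
print(english, '->', german)   # hello. -> hallo
```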
The code for this section has been upgraded to use the neural machine translation (NMT) model provided in the official TensorFlow repository: [https://github.com/tensorflow/nmt](https://github.com/tensorflow/nmt).
This project will show you how to download the data, use and modify the hyperparameters, and configure your own data to work with the project files.
While the official tutorial shows you how to do this via the command line, this recipe will show you how to train your own model from scratch using the provided internal code.
## How to do it...
1. We start by loading the necessary libraries:
```py
import os
import re
import sys
import json
import math
import time
import string
import requests
import io
import numpy as np
import collections
import random
import pickle
import matplotlib.pyplot as plt
import tensorflow as tf
from zipfile import ZipFile
from collections import Counter
from tensorflow.python.ops import lookup_ops
from tensorflow.python.framework import ops
ops.reset_default_graph()
local_repository = 'temp/seq2seq'
```
1. The following code block imports the entire NMT model repository into the `temp` folder:
```py
if not os.path.exists(local_repository):
    from git import Repo
    tf_model_repository = 'https://github.com/tensorflow/nmt/'
    Repo.clone_from(tf_model_repository, local_repository)
sys.path.insert(0, 'temp/seq2seq/nmt/')
# May also try to use 'attention model' by importing the attention model:
# from temp.seq2seq.nmt import attention_model as attention_model
from temp.seq2seq.nmt import model as model
from temp.seq2seq.nmt.utils import vocab_utils as vocab_utils
import temp.seq2seq.nmt.model_helper as model_helper
import temp.seq2seq.nmt.utils.iterator_utils as iterator_utils
import temp.seq2seq.nmt.utils.misc_utils as utils
import temp.seq2seq.nmt.train as train
```
1. Next, we set some parameters for the vocabulary size, the punctuation we will strip out, and where the data will be stored:
```py
# Model Parameters
vocab_size = 10000
punct = string.punctuation
# Data Parameters
data_dir = 'temp'
data_file = 'eng_ger.txt'
model_path = 'seq2seq_model'
full_model_dir = os.path.join(data_dir, model_path)
```
1. We will use the hyperparameter format that TensorFlow provides. This type of parameter storage (in an external `json` or `xml` file) lets us programmatically iterate over different types of architectures (kept in different files). For this demonstration, we will use the `wmt16.json` provided with the repository and make some changes to it:
```py
# Load hyper-parameters for translation model. (Good defaults are provided in Repository).
hparams = tf.contrib.training.HParams()
param_file = 'temp/seq2seq/nmt/standard_hparams/wmt16.json'
# Can also try: (For different architectures)
# 'temp/seq2seq/nmt/standard_hparams/iwslt15.json'
# 'temp/seq2seq/nmt/standard_hparams/wmt16_gnmt_4_layer.json',
# 'temp/seq2seq/nmt/standard_hparams/wmt16_gnmt_8_layer.json',
with open(param_file, "r") as f:
    params_json = json.loads(f.read())
for key, value in params_json.items():
    hparams.add_hparam(key, value)
hparams.add_hparam('num_gpus', 0)
hparams.add_hparam('num_encoder_layers', hparams.num_layers)
hparams.add_hparam('num_decoder_layers', hparams.num_layers)
hparams.add_hparam('num_encoder_residual_layers', 0)
hparams.add_hparam('num_decoder_residual_layers', 0)
hparams.add_hparam('init_op', 'uniform')
hparams.add_hparam('random_seed', None)
hparams.add_hparam('num_embeddings_partitions', 0)
hparams.add_hparam('warmup_steps', 0)
hparams.add_hparam('length_penalty_weight', 0)
hparams.add_hparam('sampling_temperature', 0.0)
hparams.add_hparam('num_translations_per_input', 1)
hparams.add_hparam('warmup_scheme', 't2t')
hparams.add_hparam('epoch_step', 0)
hparams.num_train_steps = 5000
# Do not use any pretrained embeddings
hparams.add_hparam('src_embed_file', '')
hparams.add_hparam('tgt_embed_file', '')
hparams.add_hparam('num_keep_ckpts', 5)
hparams.add_hparam('avg_ckpts', False)
# Remove attention
hparams.attention = None
```
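Because each standard architecture lives in its own JSON file, swapping configurations amounts to pointing `param_file` at a different path. Below is a small, hedged sketch of how you could inspect a few architecture-defining keys of each standard file before picking one; the file paths are the ones listed in the comments above and are assumed to exist in the cloned repository:

```py
# Sketch: peek at a few architecture-defining keys of each standard hparams file.
# Paths assume the nmt repository was cloned into temp/seq2seq as above.
candidate_param_files = [
    'temp/seq2seq/nmt/standard_hparams/wmt16.json',
    'temp/seq2seq/nmt/standard_hparams/iwslt15.json',
    'temp/seq2seq/nmt/standard_hparams/wmt16_gnmt_4_layer.json',
]
for pf in candidate_param_files:
    with open(pf, 'r') as f:
        cfg = json.load(f)
    print(pf, {k: cfg.get(k) for k in ('num_layers', 'num_units', 'attention')})
```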
1. If the model and data directories do not already exist, create them:
```py
# Make Model Directory
if not os.path.exists(full_model_dir):
    os.makedirs(full_model_dir)
# Make data directory
if not os.path.exists(data_dir):
    os.makedirs(data_dir)
```
1. Now we load the English-German translation data, downloading it from www.manythings.org and saving it to disk if it is not already there:
```py
print('Loading English-German Data')
# Check for data, if it doesn't exist, download it and save it
if not os.path.isfile(os.path.join(data_dir, data_file)):
    print('Data not found, downloading Eng-Ger sentences from www.manythings.org')
    sentence_url = 'http://www.manythings.org/anki/deu-eng.zip'
    r = requests.get(sentence_url)
    z = ZipFile(io.BytesIO(r.content))
    file = z.read('deu.txt')
    # Format Data
    eng_ger_data = file.decode('utf-8')
    eng_ger_data = eng_ger_data.encode('ascii', errors='ignore')
    eng_ger_data = eng_ger_data.decode().split('\n')
    # Write to file
    with open(os.path.join(data_dir, data_file), 'w') as out_conn:
        for sentence in eng_ger_data:
            out_conn.write(sentence + '\n')
else:
    eng_ger_data = []
    with open(os.path.join(data_dir, data_file), 'r') as in_conn:
        for row in in_conn:
            eng_ger_data.append(row[:-1])
print('Done!')
```
1. Now we remove the punctuation from the English and German sentences and split each line into word lists:
```py
# Remove punctuation
eng_ger_data = [''.join(char for char in sent if char not in punct) for sent in eng_ger_data]
# Split each sentence by tabs
eng_ger_data = [x.split('\t') for x in eng_ger_data if len(x) >= 1]
[english_sentence, german_sentence] = [list(x) for x in zip(*eng_ger_data)]
english_sentence = [x.lower().split() for x in english_sentence]
german_sentence = [x.lower().split() for x in german_sentence]
```
1. In order to use the faster data-pipeline functions in TensorFlow, we need to write the formatted data to disk in the proper format. The format the translation model expects is as follows:
```py
train_prefix.source_suffix = train.en
train_prefix.target_suffix = train.de
```
The suffix determines the language (`en` = English, `de` = Deutsch) and the prefix determines the type of dataset (training or testing):
```py
# We need to write them to separate text files for the text-line-dataset operations.
train_prefix = 'train'
src_suffix = 'en' # English
tgt_suffix = 'de' # Deutsch (German)
source_txt_file = train_prefix + '.' + src_suffix
hparams.add_hparam('src_file', source_txt_file)
target_txt_file = train_prefix + '.' + tgt_suffix
hparams.add_hparam('tgt_file', target_txt_file)
with open(source_txt_file, 'w') as f:
    for sent in english_sentence:
        f.write(' '.join(sent) + '\n')
with open(target_txt_file, 'w') as f:
    for sent in german_sentence:
        f.write(' '.join(sent) + '\n')
```
1. Next, we need to set aside some (~100) test sentence translations. Here we arbitrarily pick about 100 sentences, and then write them to the appropriate files as well:
```py
# Partition some sentences off for testing files
test_prefix = 'test_sent'
hparams.add_hparam('dev_prefix', test_prefix)
hparams.add_hparam('train_prefix', train_prefix)
hparams.add_hparam('test_prefix', test_prefix)
hparams.add_hparam('src', src_suffix)
hparams.add_hparam('tgt', tgt_suffix)
num_sample = 100
total_samples = len(english_sentence)
# Get around 'num_sample's every so often in the src/tgt sentences
ix_sample = [x for x in range(total_samples) if x % (total_samples // num_sample) == 0]
test_src = [' '.join(english_sentence[x]) for x in ix_sample]
test_tgt = [' '.join(german_sentence[x]) for x in ix_sample]
# Write test sentences to file
with open(test_prefix + '.' + src_suffix, 'w') as f:
    for eng_test in test_src:
        f.write(eng_test + '\n')
with open(test_prefix + '.' + tgt_suffix, 'w') as f:
    for ger_test in test_tgt:
        f.write(ger_test + '\n')
```
1. Next, we process the vocabularies for the English and German sentences. Then we save the vocabulary lists to the appropriate files:
```py
print('Processing the vocabularies.')
# Process the English Vocabulary
all_english_words = [word for sentence in english_sentence for word in sentence]
all_english_counts = Counter(all_english_words)
eng_word_keys = [x[0] for x in all_english_counts.most_common(vocab_size-3)]  # -3 because <unk>, <s>, </s> are also in there
eng_vocab2ix = dict(zip(eng_word_keys, range(1, vocab_size)))
eng_ix2vocab = {val: key for key, val in eng_vocab2ix.items()}
english_processed = []
for sent in english_sentence:
    temp_sentence = []
    for word in sent:
        try:
            temp_sentence.append(eng_vocab2ix[word])
        except KeyError:
            temp_sentence.append(0)
    english_processed.append(temp_sentence)
# Process the German Vocabulary
all_german_words = [word for sentence in german_sentence for word in sentence]
all_german_counts = Counter(all_german_words)
ger_word_keys = [x[0] for x in all_german_counts.most_common(vocab_size-3)]  # -3 because <unk>, <s>, </s> are also in there
ger_vocab2ix = dict(zip(ger_word_keys, range(1, vocab_size)))
ger_ix2vocab = {val: key for key, val in ger_vocab2ix.items()}
german_processed = []
for sent in german_sentence:
    temp_sentence = []
    for word in sent:
        try:
            temp_sentence.append(ger_vocab2ix[word])
        except KeyError:
            temp_sentence.append(0)
    german_processed.append(temp_sentence)
# Save vocab files for data processing
source_vocab_file = 'vocab' + '.' + src_suffix
hparams.add_hparam('src_vocab_file', source_vocab_file)
eng_word_keys = ['<unk>', '<s>', '</s>'] + eng_word_keys
target_vocab_file = 'vocab' + '.' + tgt_suffix
hparams.add_hparam('tgt_vocab_file', target_vocab_file)
ger_word_keys = ['<unk>', '<s>', '</s>'] + ger_word_keys
# Write out all unique english words
with open(source_vocab_file, 'w') as f:
    for eng_word in eng_word_keys:
        f.write(eng_word + '\n')
# Write out all unique german words
with open(target_vocab_file, 'w') as f:
    for ger_word in ger_word_keys:
        f.write(ger_word + '\n')
# Add vocab size to hyper parameters
hparams.add_hparam('src_vocab_size', vocab_size)
hparams.add_hparam('tgt_vocab_size', vocab_size)
# Add out-directory
out_dir = 'temp/seq2seq/nmt_out'
hparams.add_hparam('out_dir', out_dir)
if not tf.gfile.Exists(out_dir):
    tf.gfile.MakeDirs(out_dir)
```
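As a quick sanity check on the vocabulary maps built above, a sentence should round-trip through `eng_vocab2ix` and `eng_ix2vocab`, with any out-of-vocabulary word collapsing to index 0 (the `<unk>` token). A small illustrative snippet:

```py
# Sanity check: encode a sentence to indices and decode it back.
# Index 0 is reserved for out-of-vocabulary words (<unk>).
sample = english_sentence[10]
encoded = [eng_vocab2ix.get(word, 0) for word in sample]
decoded = [eng_ix2vocab.get(ix, '<unk>') for ix in encoded]
print(sample)
print(encoded)
print(decoded)
```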
1. Next, we will create the training, inference, and evaluation graphs separately. First, we create the training graph. We do this with a class and make the parts a `namedtuple`. This code comes from the NMT TensorFlow repository. For more information, see the file named `model_helper.py` in that repository:
```py
class TrainGraph(collections.namedtuple("TrainGraph", ("graph", "model", "iterator", "skip_count_placeholder"))):
    pass

def create_train_graph(scope=None):
    graph = tf.Graph()
    with graph.as_default():
        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(hparams.src_vocab_file, hparams.tgt_vocab_file, share_vocab=False)
        src_dataset = tf.data.TextLineDataset(hparams.src_file)
        tgt_dataset = tf.data.TextLineDataset(hparams.tgt_file)
        skip_count_placeholder = tf.placeholder(shape=(), dtype=tf.int64)
        iterator = iterator_utils.get_iterator(src_dataset, tgt_dataset, src_vocab_table, tgt_vocab_table, batch_size=hparams.batch_size, sos=hparams.sos, eos=hparams.eos, random_seed=None, num_buckets=hparams.num_buckets, src_max_len=hparams.src_max_len, tgt_max_len=hparams.tgt_max_len, skip_count=skip_count_placeholder)
        final_model = model.Model(hparams, iterator=iterator, mode=tf.contrib.learn.ModeKeys.TRAIN, source_vocab_table=src_vocab_table, target_vocab_table=tgt_vocab_table, scope=scope)
    return TrainGraph(graph=graph, model=final_model, iterator=iterator, skip_count_placeholder=skip_count_placeholder)

train_graph = create_train_graph()
```
1. We now create the evaluation graph:
```py
# Create the evaluation graph
class EvalGraph(collections.namedtuple("EvalGraph", ("graph", "model", "src_file_placeholder", "tgt_file_placeholder", "iterator"))):
    pass

def create_eval_graph(scope=None):
    graph = tf.Graph()
    with graph.as_default():
        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(
            hparams.src_vocab_file, hparams.tgt_vocab_file, hparams.share_vocab)
        src_file_placeholder = tf.placeholder(shape=(), dtype=tf.string)
        tgt_file_placeholder = tf.placeholder(shape=(), dtype=tf.string)
        src_dataset = tf.data.TextLineDataset(src_file_placeholder)
        tgt_dataset = tf.data.TextLineDataset(tgt_file_placeholder)
        iterator = iterator_utils.get_iterator(
            src_dataset,
            tgt_dataset,
            src_vocab_table,
            tgt_vocab_table,
            hparams.batch_size,
            sos=hparams.sos,
            eos=hparams.eos,
            random_seed=hparams.random_seed,
            num_buckets=hparams.num_buckets,
            src_max_len=hparams.src_max_len_infer,
            tgt_max_len=hparams.tgt_max_len_infer)
        final_model = model.Model(hparams,
                                  iterator=iterator,
                                  mode=tf.contrib.learn.ModeKeys.EVAL,
                                  source_vocab_table=src_vocab_table,
                                  target_vocab_table=tgt_vocab_table,
                                  scope=scope)
    return EvalGraph(graph=graph,
                     model=final_model,
                     src_file_placeholder=src_file_placeholder,
                     tgt_file_placeholder=tgt_file_placeholder,
                     iterator=iterator)

eval_graph = create_eval_graph()
```
1. Now we do the same for the inference graph:
```py
# Inference graph
class InferGraph(collections.namedtuple("InferGraph", ("graph", "model", "src_placeholder", "batch_size_placeholder", "iterator"))):
    pass

def create_infer_graph(scope=None):
    graph = tf.Graph()
    with graph.as_default():
        src_vocab_table, tgt_vocab_table = vocab_utils.create_vocab_tables(hparams.src_vocab_file, hparams.tgt_vocab_file, hparams.share_vocab)
        reverse_tgt_vocab_table = lookup_ops.index_to_string_table_from_file(hparams.tgt_vocab_file, default_value=vocab_utils.UNK)
        src_placeholder = tf.placeholder(shape=[None], dtype=tf.string)
        batch_size_placeholder = tf.placeholder(shape=[], dtype=tf.int64)
        src_dataset = tf.data.Dataset.from_tensor_slices(src_placeholder)
        iterator = iterator_utils.get_infer_iterator(src_dataset,
                                                     src_vocab_table,
                                                     batch_size=batch_size_placeholder,
                                                     eos=hparams.eos,
                                                     src_max_len=hparams.src_max_len_infer)
        final_model = model.Model(hparams,
                                  iterator=iterator,
                                  mode=tf.contrib.learn.ModeKeys.INFER,
                                  source_vocab_table=src_vocab_table,
                                  target_vocab_table=tgt_vocab_table,
                                  reverse_target_vocab_table=reverse_tgt_vocab_table,
                                  scope=scope)
    return InferGraph(graph=graph,
                      model=final_model,
                      src_placeholder=src_placeholder,
                      batch_size_placeholder=batch_size_placeholder,
                      iterator=iterator)

infer_graph = create_infer_graph()
```
1. To provide more illustrative output during training, we put together a short list of arbitrary source/target translations that will be printed during the training iterations:
```py
# Create sample data for evaluation
sample_ix = [25, 125, 240, 450]
sample_src_data = [' '.join(english_sentence[x]) for x in sample_ix]
sample_tgt_data = [' '.join(german_sentence[x]) for x in sample_ix]
print([x for x in zip(sample_src_data, sample_tgt_data)])
```
1. Next, we load the training graph:
```py
config_proto = utils.get_config_proto()
train_sess = tf.Session(config=config_proto, graph=train_graph.graph)
eval_sess = tf.Session(config=config_proto, graph=eval_graph.graph)
infer_sess = tf.Session(config=config_proto, graph=infer_graph.graph)
# Load the training graph
with train_graph.graph.as_default():
    loaded_train_model, global_step = model_helper.create_or_load_model(train_graph.model,
                                                                        hparams.out_dir,
                                                                        train_sess,
                                                                        "train")

summary_writer = tf.summary.FileWriter(os.path.join(hparams.out_dir, 'Training'), train_graph.graph)
```
1. Now we add the evaluation operations to the graph:
```py
for metric in hparams.metrics:
    hparams.add_hparam("best_" + metric, 0)
    best_metric_dir = os.path.join(hparams.out_dir, "best_" + metric)
    hparams.add_hparam("best_" + metric + "_dir", best_metric_dir)
    tf.gfile.MakeDirs(best_metric_dir)
eval_output = train.run_full_eval(hparams.out_dir, infer_graph, infer_sess, eval_graph, eval_sess, hparams, summary_writer, sample_src_data, sample_tgt_data)
eval_results, _, acc_blue_scores = eval_output
```
1. Now we create the initialization operations and initialize the graph; we also initialize some parameters (time, global step, and epoch step) that will be updated with each iteration:
```py
# Training Initialization
last_stats_step = global_step
last_eval_step = global_step
last_external_eval_step = global_step
steps_per_eval = 10 * hparams.steps_per_stats
steps_per_external_eval = 5 * steps_per_eval
avg_step_time = 0.0
step_time, checkpoint_loss, checkpoint_predict_count = 0.0, 0.0, 0.0
checkpoint_total_count = 0.0
speed, train_ppl = 0.0, 0.0
utils.print_out("# Start step %d, lr %g, %s" %
(global_step, loaded_train_model.learning_rate.eval(session=train_sess),
time.ctime()))
skip_count = hparams.batch_size * hparams.epoch_step
utils.print_out("# Init train iterator, skipping %d elements" % skip_count)
train_sess.run(train_graph.iterator.initializer,
feed_dict={train_graph.skip_count_placeholder: skip_count})
```
> Note that, by default, training will save the model every 1,000 iterations. You can change this in the hyperparameters if needed. Currently, training this model and saving the latest five checkpoints takes up about 2 GB of hard-drive space.
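For example, the checkpoint interval in this recipe is derived from `steps_per_stats` (the training-initialization step above computes `steps_per_eval = 10 * hparams.steps_per_stats`), and the number of retained checkpoints comes from the `num_keep_ckpts` hyperparameter added in step 4. A small sketch of what you would change; these need to be set back in step 4, before the graphs and the evaluation schedule are built, for them to take effect:

```py
# Sketch: checkpoint less often and keep fewer checkpoints (saves disk space).
# These assignments belong back in step 4, before the graphs and the
# evaluation schedule are built; changing them at this point has no effect.
hparams.steps_per_stats = 200   # steps_per_eval = 10 * steps_per_stats -> checkpoint every 2,000 steps
hparams.num_keep_ckpts = 2      # the model's Saver keeps only the 2 most recent checkpoints
```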
1. The following code will start training and evaluating the model. The important training part is at the very beginning of the loop (the first third). The rest of the code is devoted to evaluation, inference from the sample sentences, and saving the model, as follows:
```py
# Run training
while global_step < hparams.num_train_steps:
    start_time = time.time()
    try:
        step_result = loaded_train_model.train(train_sess)
        (_, step_loss, step_predict_count, step_summary, global_step, step_word_count,
         batch_size, __, ___) = step_result
        hparams.epoch_step += 1
    except tf.errors.OutOfRangeError:
        # Next Epoch
        hparams.epoch_step = 0
        utils.print_out("# Finished an epoch, step %d. Perform external evaluation" % global_step)
        train.run_sample_decode(infer_graph,
                                infer_sess,
                                hparams.out_dir,
                                hparams,
                                summary_writer,
                                sample_src_data,
                                sample_tgt_data)
        dev_scores, test_scores, _ = train.run_external_eval(infer_graph,
                                                             infer_sess,
                                                             hparams.out_dir,
                                                             hparams,
                                                             summary_writer)
        train_sess.run(train_graph.iterator.initializer,
                       feed_dict={train_graph.skip_count_placeholder: 0})
        continue

    summary_writer.add_summary(step_summary, global_step)

    # Statistics
    step_time += (time.time() - start_time)
    checkpoint_loss += (step_loss * batch_size)
    checkpoint_predict_count += step_predict_count
    checkpoint_total_count += float(step_word_count)

    # print statistics
    if global_step - last_stats_step >= hparams.steps_per_stats:
        last_stats_step = global_step
        avg_step_time = step_time / hparams.steps_per_stats
        train_ppl = utils.safe_exp(checkpoint_loss / checkpoint_predict_count)
        speed = checkpoint_total_count / (1000 * step_time)
        utils.print_out(" global step %d lr %g "
                        "step-time %.2fs wps %.2fK ppl %.2f %s" %
                        (global_step,
                         loaded_train_model.learning_rate.eval(session=train_sess),
                         avg_step_time, speed, train_ppl, train._get_best_results(hparams)))
        if math.isnan(train_ppl):
            break
        # Reset timer and loss.
        step_time, checkpoint_loss, checkpoint_predict_count = 0.0, 0.0, 0.0
        checkpoint_total_count = 0.0

    if global_step - last_eval_step >= steps_per_eval:
        last_eval_step = global_step
        utils.print_out("# Save eval, global step %d" % global_step)
        utils.add_summary(summary_writer, global_step, "train_ppl", train_ppl)
        # Save checkpoint
        loaded_train_model.saver.save(train_sess,
                                      os.path.join(hparams.out_dir, "translate.ckpt"),
                                      global_step=global_step)
        # Evaluate on dev/test
        train.run_sample_decode(infer_graph,
                                infer_sess,
                                out_dir,
                                hparams,
                                summary_writer,
                                sample_src_data,
                                sample_tgt_data)
        dev_ppl, test_ppl = train.run_internal_eval(eval_graph,
                                                    eval_sess,
                                                    out_dir,
                                                    hparams,
                                                    summary_writer)

    if global_step - last_external_eval_step >= steps_per_external_eval:
        last_external_eval_step = global_step
        # Save checkpoint
        loaded_train_model.saver.save(train_sess,
                                      os.path.join(hparams.out_dir, "translate.ckpt"),
                                      global_step=global_step)
        train.run_sample_decode(infer_graph,
                                infer_sess,
                                out_dir,
                                hparams,
                                summary_writer,
                                sample_src_data,
                                sample_tgt_data)
        dev_scores, test_scores, _ = train.run_external_eval(infer_graph,
                                                             infer_sess,
                                                             out_dir,
                                                             hparams,
                                                             summary_writer)
```
## How it works...
For this recipe, we used TensorFlow's built-in sequence-to-sequence model to translate from English to German.
Since we did not get perfect translations for our test sentences, there is still room for improvement. If we trained for longer, and perhaps combined some of the buckets (so each bucket holds more training data), we might be able to improve our translations.
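If you want to look at translations of new English text once training has finished, one option is to re-use the inference graph built in step 12. The following is only a rough sketch, and it assumes that the cloned NMT repository's `model_helper.create_or_load_model` and the model's `decode` method behave as they do in the version of the repository used here:

```py
# Rough sketch: decode a new sentence with the trained inference graph.
# Assumes checkpoints from training exist under hparams.out_dir and that
# Model.decode(sess) returns (sample_words, infer_summary) as in the nmt repo.
test_sentences = ['thank you very much']
with infer_graph.graph.as_default():
    loaded_infer_model, _ = model_helper.create_or_load_model(
        infer_graph.model, hparams.out_dir, infer_sess, "infer")
infer_sess.run(infer_graph.iterator.initializer,
               feed_dict={infer_graph.src_placeholder: test_sentences,
                          infer_graph.batch_size_placeholder: len(test_sentences)})
sample_words, _ = loaded_infer_model.decode(infer_sess)
print(sample_words)
```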
## There's more...
Other similar bilingual sentence datasets are hosted on the Many Things website ([http://www.manythings.org/anki/](http://www.manythings.org/anki/)). Feel free to substitute any language dataset that appeals to you.
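For example, to train an English-to-French model instead, only a few of the data parameters from the earlier steps would need to change. A sketch, assuming the `fra-eng.zip` archive and the `fra.txt` file inside it follow the same naming pattern as the German data:

```py
# Sketch: point the recipe at a different Many Things language pair (French).
sentence_url = 'http://www.manythings.org/anki/fra-eng.zip'   # instead of deu-eng.zip
data_file = 'eng_fra.txt'       # local file name for the raw data
tgt_suffix = 'fr'               # target-language suffix for the train/test/vocab files
# In step 6, read 'fra.txt' from the zip instead of 'deu.txt';
# the rest of the recipe (punctuation removal, vocab building, training) is unchanged.
```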