The material in the book can be consulted, along with a few blog posts on this topic.
The experimental data comes from a collection of classical Chinese poems gathered by enthusiasts on GitHub; the author did some additional preprocessing on top of it. Since that preprocessing is time-consuming and not the focus of learning PyTorch, it is omitted here. The author provides a compressed NumPy archive, tang.npz; download link
```python
import numpy as np

# Load the data
datas = np.load('...your path/tang.npz', allow_pickle=True)
data = datas['data']  # numpy.ndarray
print(data)
print(np.shape(data))
```
```
[[8292 8292 8292 ...  846 7435 8290]
 [8292 8292 8292 ... 7878 7435 8290]
 [8292 8292 8292 ... 4426 7435 8290]
 ...
 [8292 8292 8292 ... 7739 7435 8290]
 [8292 8292 8292 ... 7290 7435 8290]
 [8292 8292 8292 ... 1294 7435 8290]]
(57580, 125)
```
`data` is a 57580 × 125 NumPy array: 57580 poems in total, each exactly 125 characters long (poems shorter than 125 are padded, those longer are truncated).
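As a rough illustration of how such a fixed-size array could be produced (a hypothetical sketch, not the author's omitted preprocessing; `pad_poem` and the left-padding choice are assumptions, with 8292 used as the pad index as in tang.npz):

```python
import numpy as np

def pad_poem(indices, maxlen=125, pad=8292):
    """Left-pad a short index sequence and truncate a long one to exactly maxlen."""
    indices = list(indices)[:maxlen]               # drop anything beyond maxlen
    return [pad] * (maxlen - len(indices)) + indices

rows = [pad_poem([10, 11, 12]), pad_poem(range(300))]
data = np.array(rows)
print(data.shape)                  # (2, 125)
print(data[0][0], data[0][-1])     # 8292 12
```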
```python
ix2word = datas['ix2word']
print(ix2word)
```
```
{0: '憁', 1: '耀', 2: '枅', 3: '涉', 4: '谈', ..., 8290: '<EOP>', 8291: '<START>', 8292: '</s>'}
```
```python
ix2word = datas['ix2word']
ix2word2 = datas['ix2word'].item()
print(ix2word == ix2word2)
```
True
So the dictionary compares equal whether or not `.item()` is called. But dropping `.item()` leads to an error later when the object is indexed: IndexError: too many indices for array
```python
ix2word = datas['ix2word']
print(type(ix2word))    # <class 'numpy.ndarray'>
ix2word2 = datas['ix2word'].item()
print(type(ix2word2))   # <class 'dict'>
```
The two are actually different types: the first is a (0-dimensional) numpy.ndarray holding the dict as an object, the second is a plain Python dict. A more direct example:
```python
x = np.array({8290: '<EOP>', 8291: '<START>', 8292: '</s>'})
print(x)
print(x.item())
print(type(x))
print(type(x.item()))
print(x[0])
```
```
{8290: '<EOP>', 8291: '<START>', 8292: '</s>'}
{8290: '<EOP>', 8291: '<START>', 8292: '</s>'}
<class 'numpy.ndarray'>
<class 'dict'>
IndexError: too many indices for array
```
On NumPy's .item() usage:
```python
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```
So presumably the dictionaries are stored inside the .npz file as (0-dimensional) NumPy object arrays, which is why `.item()` is needed to pull the actual Python dict out.
```python
ix2word = datas['ix2word'].item()

# Look at the first poem
poem = data[0]
print(poem)
print(len(poem))

# Convert word indices to the corresponding characters
poem_txt = [ix2word[i] for i in poem]
print(''.join(poem_txt))
```
```
[8292 8292 8292 ... 8292 8292 8291 6731 4770 1787 8118 7577 7066 4817
  648 7121 1542 6483 7435 7686 2889 1671 5862 1949 7066 2596 4785 3629
 1379 2703 7435 6064 6041 4666 4038 4881 7066 4747 1534   70 3788 3823
 7435 4907 5567  201 2834 1519 7066  782  782 2063 2031  846 7435 8290]
125
</s></s></s> ... <START>度门能不访,冒雪屡西东。已想人如玉,遥怜马似骢。乍迷金谷路,稍变上阳宫。还比相思意,纷纷正满空。<EOP>
```
Notice that '。' corresponds to 7435 and ',' corresponds to 7066; the poem proper starts after the `<START>` token (8291), ends with `<EOP>` (8290), and `</s>` (8292) is used as padding.
In the same way, text can be converted back into numbers:
```python
word2ix = datas['word2ix'].item()

# Convert characters to the corresponding word indices
poem_txt = '度门能不访,冒雪屡西东。'
poem = [word2ix[i] for i in poem_txt]
print(poem)
```
[6731, 4770, 1787, 8118, 7577, 7066, 4817, 648, 7121, 1542, 6483, 7435]
About the allow_pickle=True argument of np.load(): it is a boolean that permits loading object arrays saved with Python pickles. Without it, loading fails with: ValueError: Object arrays cannot be loaded when allow_pickle=False — so it has to be passed here.
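A self-contained sketch of why the flag is needed (the file name is hypothetical): a dict gets wrapped into a 0-dimensional object array when saved, which can only be restored with allow_pickle=True:

```python
import numpy as np
import os
import tempfile

d = {0: 'a', 1: 'b'}
path = os.path.join(tempfile.mkdtemp(), 'demo.npz')
np.savez(path, ix2word=d)                 # the dict is stored as an object array
loaded = np.load(path, allow_pickle=True)
print(type(loaded['ix2word']))            # <class 'numpy.ndarray'>
print(loaded['ix2word'].item() == d)      # True
```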
```python
import numpy as np   # for handling the tang.npz archive
import os            # for file checks
import torch

def get_data():
    if os.path.exists(data_path):
        datas = np.load(data_path, allow_pickle=True)  # load the data
        data = datas['data']                # numpy.ndarray
        word2ix = datas['word2ix'].item()   # dict
        ix2word = datas['ix2word'].item()   # dict
        return data, word2ix, ix2word

if __name__ == '__main__':
    data_path = '...your path/tang.npz'
    data, word2ix, ix2word = get_data()
    data = torch.from_numpy(data)
    dataloader = torch.utils.data.DataLoader(data,
                                             batch_size=128,
                                             shuffle=True,  # shuffle the poems
                                             num_workers=1)
```
Here data is not wrapped in a Dataset object, yet it can still be loaded in parallel through DataLoader. This is because data, as a Tensor, already implements the __getitem__ and __len__ methods: data[0] works via __getitem__, and len(data) returns data.size(0). This style is called duck typing, a dynamic-typing idiom.
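A minimal duck-typing sketch (the `Squares` class is hypothetical): any object that provides `__getitem__` and `__len__` supports indexing, `len()`, and even iteration, which is all DataLoader relies on:

```python
class Squares:
    """Not a Dataset subclass; it just quacks like one."""
    def __init__(self, n):
        self.n = n
    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)  # lets iteration know where to stop
        return i * i
    def __len__(self):
        return self.n

s = Squares(6)
print(len(s))    # 6
print(s[3])      # 9
print(list(s))   # [0, 1, 4, 9, 16, 25]
```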
Rewatched Hung-yi Lee's 2020 machine learning course. For the parameters and usage of LSTM in PyTorch, see the linked references.
For each element of the input sequence, every LSTM layer computes:

$$
\begin{aligned}
i_t &= \mathrm{sigmoid}(W_{ii}x_t + b_{ii} + W_{hi}h_{t-1} + b_{hi}) \\
f_t &= \mathrm{sigmoid}(W_{if}x_t + b_{if} + W_{hf}h_{t-1} + b_{hf}) \\
o_t &= \mathrm{sigmoid}(W_{io}x_t + b_{io} + W_{ho}h_{t-1} + b_{ho}) \\
g_t &= \tanh(W_{ig}x_t + b_{ig} + W_{hg}h_{t-1} + b_{hg}) \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where $h_t$ is the hidden state at time $t$, $c_t$ the cell state at time $t$, and $x_t$ the previous layer's hidden state at time $t$ (or, for the first layer, the input at time $t$). $i_t, f_t, g_t, o_t$ are the input gate, forget gate, cell gate, and output gate respectively.
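The six equations can be checked with a tiny NumPy sketch of a single LSTM step (toy sizes and random weights are assumptions; the weights are stacked in PyTorch's i, f, g, o gate order):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
input_size, hidden = 4, 3
W_i = rng.standard_normal((4 * hidden, input_size))  # stacked W_ii, W_if, W_ig, W_io
W_h = rng.standard_normal((4 * hidden, hidden))      # stacked W_hi, W_hf, W_hg, W_ho
b_i = np.zeros(4 * hidden)
b_h = np.zeros(4 * hidden)

x_t = rng.standard_normal(input_size)
h_prev = np.zeros(hidden)
c_prev = np.zeros(hidden)

gates = W_i @ x_t + b_i + W_h @ h_prev + b_h
i_t = sigmoid(gates[0 * hidden:1 * hidden])
f_t = sigmoid(gates[1 * hidden:2 * hidden])
g_t = np.tanh(gates[2 * hidden:3 * hidden])
o_t = sigmoid(gates[3 * hidden:4 * hidden])
c_t = f_t * c_prev + i_t * g_t     # new cell state
h_t = o_t * np.tanh(c_t)           # new hidden state
print(h_t.shape, c_t.shape)        # (3,) (3,)
```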
The main parameters of nn.LSTM are input_size (the dimension of one $x_t$), hidden_size, num_layers, bias, batch_first, dropout, and bidirectional.

Outputs: output with shape (seq_len, batch, num_directions * hidden_size) (or (batch, seq_len, ...) when batch_first=True), plus the tuple (h_n, c_n), each with shape (num_layers * num_directions, batch, hidden_size).
Suppose each word is a 100-dimensional vector, each sentence contains 24 words, and we train on 10 sentences at a time. Then batch_size=10, seq_len=24, input_size=100 (seq_len is the sentence length; input_size is the dimension of one $x_t$).
```python
model = nn.LSTM(100, 16, num_layers=2)  # input_size=100, hidden_size=16, 2 layers
x = torch.rand(10, 24, 100)             # batch=10, seq_len=24, input_size=100
output, (h, c) = model(x)
print(output.size())
print(h.size())
print(c.size())
```
```
torch.Size([10, 24, 16])
torch.Size([2, 24, 16])
torch.Size([2, 24, 16])
```
Note that h came out as [2, 24, 16]: without batch_first=True, nn.LSTM interprets its input as (seq_len, batch, input_size), so the 10 above was treated as the sequence length and the 24 as the batch size. Adding batch_first=True:
```python
model = nn.LSTM(100, 16, num_layers=2, batch_first=True)
x = torch.rand(10, 24, 100)  # batch=10, seq_len=24, input_size=100
output, (h, c) = model(x)
print(output.size())
print(h.size())
print(c.size())
```
```
torch.Size([10, 24, 16])
torch.Size([2, 10, 16])
torch.Size([2, 10, 16])
```
```python
model = nn.LSTM(100, 16, num_layers=3, batch_first=True, bidirectional=True)
x = torch.rand(10, 24, 100)
output, (h, c) = model(x)
print(output.size())
print(h.size())
print(c.size())
```
```
torch.Size([10, 24, 32])
torch.Size([6, 10, 16])
torch.Size([6, 10, 16])
```
As for what the h and c returned by nn.LSTM are: h_n holds the final hidden state of every layer (and direction) at the last time step, and c_n the corresponding final cell states; see the linked reference.
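One way to see what h is: for a unidirectional LSTM, the top layer's final hidden state equals the last time-step slice of output (a small check with assumed toy sizes):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.LSTM(5, 7, num_layers=2)  # seq-first layout: (seq_len, batch, input_size)
x = torch.rand(3, 2, 5)              # seq_len=3, batch=2
output, (h, c) = model(x)
# h[-1] is the top layer's hidden state after the final step —
# exactly the last slice of output:
print(torch.allclose(output[-1], h[-1]))  # True
```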
torch.nn.Embedding() maps word indices to word vectors. Since no pretrained word vectors are supplied here, the embedding starts from random vectors, which are then trained along with the rest of the model.
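A quick sketch of nn.Embedding (toy vocabulary size and dimension are assumptions): each index simply selects a row from a randomly initialized, trainable weight matrix:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(10, 4)        # vocab of 10 words, 4-dim vectors
idx = torch.tensor([[1, 2, 5]])  # one sequence of 3 word indices
vecs = emb(idx)
print(vecs.shape)                # torch.Size([1, 3, 4])
# row lookup: the vector for index 1 is just row 1 of the weight matrix
print(torch.equal(vecs[0, 0], emb.weight[1]))  # True
```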
The .new() method creates a new tensor whose dtype and device match the original tensor's; see the linked reference for details. For example:
```python
x = torch.Tensor([1, 2, 3])
y = x.data[1]
z = x.data.new(2, 2, 2).fill_(0).float()
print(x)
print(y)
print(z)
```
Result:
```
tensor([1., 2., 3.])
tensor(2.)
tensor([[[0., 0.],
         [0., 0.]],

        [[0., 0.],
         [0., 0.]]])
```
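In more recent PyTorch code, x.new(...).fill_(0) is usually written with new_zeros, which likewise inherits the dtype and device (a small equivalence check):

```python
import torch

x = torch.tensor([1., 2., 3.])
z1 = x.data.new(2, 2).fill_(0)  # legacy style used in this post
z2 = x.new_zeros(2, 2)          # modern equivalent
print(torch.equal(z1, z2))      # True
print(z2.dtype == x.dtype)      # True
```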
The model-construction code is as follows:
```python
class Net(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super(Net, self).__init__()
        self.hidden_dim = hidden_dim
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, num_layers=2,
                            batch_first=False)  # forward() feeds (seq_len, batch) input
        # LSTM input:  (seq, batch, input_size)
        # LSTM output: seq * batch * 256; hidden: (2 * batch * 256, ...)
        self.linear1 = nn.Linear(self.hidden_dim, vocab_size)

    def forward(self, input, hidden=None):
        seq_len, batch_size = input.size()
        if hidden is None:
            h_0 = input.data.new(2, batch_size, self.hidden_dim).fill_(0).float()
            c_0 = input.data.new(2, batch_size, self.hidden_dim).fill_(0).float()
            h_0, c_0 = Variable(h_0), Variable(c_0)
        else:
            h_0, c_0 = hidden
        embeds = self.embeddings(input)
        # (seq_len, batch_size, embedding_dim), e.g. (1, 1, 128)
        output, hidden = self.lstm(embeds, (h_0, c_0))
        # (seq_len, batch_size, hidden_dim), e.g. (1, 1, 256)
        output = self.linear1(output.view(seq_len * batch_size, -1))
        # (seq_len * batch_size, hidden_dim) → (seq_len * batch_size, vocab_size),
        # e.g. (1, 256) → (1, 8293)
        return output, hidden
```
torchnet: we use the meter module from the torchnet package. meter provides some lightweight tools for quickly tracking statistics (such as the average loss) during training. Install it from the command line:
pip install torchnet
tqdm and enumerate(): enumerate() is a Python built-in that walks a list while yielding both each index and its element:

```python
from tqdm import tqdm

lt = ['a', 'b', 'c']
for i, item in enumerate(lt):
    print(i, item)
```

Output:

```
0 a
1 b
2 c
```
For the tensor data types in PyTorch, see the linked reference.
torch.transpose(Tensor, dim0, dim1) transposes a PyTorch tensor; it can only swap two dimensions at a time. Afterwards, .contiguous() returns a tensor with the same data laid out contiguously in memory; see the linked reference for details.
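A short demonstration of the transpose + contiguous pattern (toy tensor assumed): transpose returns a view with non-contiguous strides, and contiguous() materializes a compact copy:

```python
import torch

a = torch.arange(6).view(2, 3)   # [[0, 1, 2], [3, 4, 5]]
b = torch.transpose(a, 0, 1)     # shape (3, 2), still a view of a
print(b.is_contiguous())         # False
c = b.contiguous()               # same values, contiguous memory
print(c.is_contiguous())         # True
print(c.view(-1).tolist())       # [0, 3, 1, 4, 2, 5]
```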
Offsetting the data by one step
```python
data = torch.Tensor([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9],
                     [10, 11, 12]])
print(data[:-1, :])
print(data[1:, :])
```
```
tensor([[1., 2., 3.],
        [4., 5., 6.],
        [7., 8., 9.]])
tensor([[ 4.,  5.,  6.],
        [ 7.,  8.,  9.],
        [10., 11., 12.]])
```
So four rows of data actually give three training pairs: input [1., 2., 3.] with expected output [4., 5., 6.], input [4., 5., 6.] with expected output [7., 8., 9.], and so on.
Training raised an error:

RuntimeError: CUDA out of memory. Tried to allocate 504.00 MiB (GPU 0; 2.00 GiB total capacity; 713.18 MiB already allocated; 439.91 MiB free; 734.00 MiB reserved in total by PyTorch)

Fix: reduce batch_size (here to 8).
loss.data[0] raised:

IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item()` in C++ to convert a 0-dim tensor to a number

This is a PyTorch version difference; change loss.data[0] to loss.item().
Training for four epochs:
```
7198it [03:40, 32.59it/s]
7198it [03:42, 32.35it/s]
7198it [03:42, 32.34it/s]
7198it [03:41, 32.51it/s]
```
The training curve is as follows:
The curve looks slightly discontinuous; the culprit is presumably period.append(i + epoch * len(dataloader)): since 57580 / 8 = 7197.5, the x-coordinates were never continuous across epochs in the first place. Changing epoch to 10 still shows the same artifact.
During another training run (8 epochs, 5758 batches each at batch_size=10), a sample is generated from the prompt 床前明月光 every 575 batches. Representative samples, the first and last of each epoch:

```
Cuda is available!
Epoch 1  574: 床前明月光,不不不不人。
Epoch 1 5749: 床前明月光,一日不可见。一旦不可知,不知何所见。我有不可言,不知何所为。我来不可见,不得无人语。
Epoch 2  574: 床前明月光,一日一相逢。一夜一千里,一年一一年。一年一相见,一夜不可忘。一年不可见,一片不可忘。
Epoch 2 5749: 床前明月光,风吹一枝枝。一夜不可见,一声不可攀。君不见青山,不见青山中。我有一片日,不见一片霜。
Epoch 3  574: 床前明月光,日月照清光。不知何所有,不觉心如何。不知何所有,不觉不可寻。我来不得意,不觉心如何。
Epoch 3 5749: 床前明月光,日月照清光。一朝一杯酒,一笑一时新。一朝不可见,一日无所求。我有一日心,不如此中情。
Epoch 4  574: 床前明月光,日日照中庭。一日不相见,一言不可寻。君子不可见,君子不可寻。我有一夫子,不知身不平。
Epoch 4 5749: 床前明月光,一日一日出。一朝一朝夕,一日一一日。一朝一朝夕,一日一一日。一朝一朝夕,一日一百里。
Epoch 5  574: 床前明月光,日月照清光。一朝一朝夕,一旦一日新。一朝一朝夕,一旦一日新。一朝一朝夕,一醉一相亲。
Epoch 5 5749: 床前明月光,日月照清凉。清泠不可见,清净不可寻。我有一寸心,不如一日心。我心不可忘,我心不可忘。
Epoch 6  574: 床前明月光,日日照前楹。不知何所有,不觉心相亲。我来不得意,我亦无所闻。我来不得意,我亦无所闻。
Epoch 6 5749: 床前明月光,日月照清明。一朝不可见,万事皆可怜。一朝不可见,万事皆可怜。一朝不可见,万里无所营。
Epoch 7  574: 床前明月光,日月照清景。清晨日月明,清夜清风起。清风吹白云,飒飒洒幽草。清风忽相见,白日忽相见。
Epoch 7 5749: 床前明月光如玉,玉堂金缕金琅玕。玉壺金缕金缕缕,玉钗玉匣金麒麟。玉钗玉匣金麒麟,玉钗金缕金麒麟。
Epoch 8  574: 床前明月光如玉,月照青山天上天。玉楼月色照天色,金牓玲珑照天碧。玉蟾蜍上金麒麟,玉蟾一日金麒麟。
Epoch 8 5749: 床前明月光如玉,夜夜相逢不相见。君不见君前见君意,今人不见君王侯。君不见君王不得意,今日相逢不相
```
An open question: how exactly is Cross Entropy Loss computed? Worth investigating when there is time:
```python
loss = criterion(output, target.view(-1))
# torch.Size([15872, 8293]), torch.Size([15872])
print(output)
print(output.size())
print(target.view(-1))
print(target.view(-1).size())
```
```
tensor([[ 0.0548, -0.0241,  0.0339,  ...,  0.0208,  0.0340,  0.0335],
        [ 0.0548, -0.0241,  0.0339,  ...,  0.0208,  0.0340,  0.0335],
        [ 0.0548, -0.0241,  0.0339,  ...,  0.0208,  0.0340,  0.0335],
        ...,
        [ 0.0807, -0.0297,  0.0327,  ...,  0.0230,  0.0151,  0.0318],
        [ 0.0465, -0.0348,  0.0352,  ...,  0.0250,  0.0118,  0.0279],
        [ 0.0407, -0.0019,  0.0164,  ...,  0.0171,  0.0153,  0.0302]],
       device='cuda:0', grad_fn=<...>)
torch.Size([1240, 8293])
tensor([8292, 8292, 8292,  ..., 8290, 8290, 8290], device='cuda:0')
torch.Size([1240])
```
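To the open question above: nn.CrossEntropyLoss applies log_softmax to the logits and then averages the negative log-probability of each sample's target class (a small check with made-up logits):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.2, 0.3]])
target = torch.tensor([0, 2])
loss = nn.CrossEntropyLoss()(logits, target)
# manual: pick the log-softmax value at each row's target index, negate, average
manual = -F.log_softmax(logits, dim=1)[torch.arange(2), target].mean()
print(torch.allclose(loss, manual))  # True
```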
Now look at this line of code: top_index = output.data[0].topk(1)[1][0]
```python
print(output)
# tensor([[-12.6328,  -5.6997, -14.0606,  ..., -16.1376, -18.8906, -14.4208]],
#        device='cuda:0', grad_fn=<...>)

print(output.data)
# tensor([[-12.6328,  -5.6997, -14.0606,  ..., -16.1376, -18.8906, -14.4208]],
#        device='cuda:0')

print(output.data[0])
# tensor([-12.6328,  -5.6997, -14.0606,  ..., -16.1376, -18.8906, -14.4208],
#        device='cuda:0')

print(output.data[0].topk(1))
# torch.return_types.topk(values=tensor([2.8427], device='cuda:0'),
#                         indices=tensor([7066], device='cuda:0'))

print(output.data[0].topk(1)[1])
# tensor([7066], device='cuda:0')

print(output.data[0].topk(1)[1][0])
# tensor(7066, device='cuda:0')
```
So topk() returns the k largest (or, with largest=False, smallest) values along some dimension of a tensor, together with their indices. To find the single most likely class per sample, torch.max works just as well; if you use topk for that, set k to 1.
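A small comparison of the two (made-up scores): topk returns (values, indices), while max along a dimension returns the single best value and its index:

```python
import torch

t = torch.tensor([0.1, 2.5, -0.3, 1.7])
values, indices = t.topk(1)
print(values.tolist(), indices.tolist())  # [2.5] [1]
best, arg = t.max(0)
print(best.item(), arg.item())            # 2.5 1
```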
epoch = 8
机中有白玉,不得一一枝。器小不可得,不如不可欺。学之不可用,不为人世衰。习习不可见,我心不可求。
epoch = 4
机生不得意,不是不可求。器以不得意,不知何所求。学道不自得,不知无所求。习之不可见,不得无所求。
杨柳花,春色白,一枝一枝春色。思君不见,春色,春风吹落花,春风满庭树,春风满庭树,春风满庭树,一杨柳花,春风吹,春风吹花,风吹花。思君不知春水绿,江南日暮江南北。程人不见,春风吹,江上月明天。
```python
import numpy as np   # for handling the tang.npz archive
import os            # for file checks
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import matplotlib.pyplot as plt
from torchnet import meter
import tqdm


def get_data():
    if os.path.exists(data_path):
        datas = np.load(data_path, allow_pickle=True)  # load the data
        data = datas['data']                # numpy.ndarray
        word2ix = datas['word2ix'].item()   # dict
        ix2word = datas['ix2word'].item()   # dict
        return data, word2ix, ix2word


class Net(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super(Net, self).__init__()
        self.hidden_dim = hidden_dim
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, self.hidden_dim, num_layers=2, batch_first=False)
        # LSTM input:  (seq, batch, input_size)
        # LSTM output: seq * batch * 256; hidden: (2 * batch * 256, ...)
        self.linear1 = nn.Linear(self.hidden_dim, vocab_size)

    def forward(self, input, hidden=None):
        seq_len, batch_size = input.size()
        if hidden is None:
            h_0 = input.data.new(2, batch_size, self.hidden_dim).fill_(0).float()
            c_0 = input.data.new(2, batch_size, self.hidden_dim).fill_(0).float()
            h_0, c_0 = Variable(h_0), Variable(c_0)
        else:
            h_0, c_0 = hidden
        embeds = self.embeddings(input)
        # (seq_len, batch_size, embedding_dim), e.g. (124, 128, 128)
        output, hidden = self.lstm(embeds, (h_0, c_0))
        # (seq_len, batch_size, hidden_dim), e.g. (124, 128, 256)
        output = self.linear1(output.view(seq_len * batch_size, -1))
        # (seq_len * batch_size, hidden_dim) → (seq_len * batch_size, vocab_size),
        # e.g. (15872, 256) → (15872, 8293)
        return output, hidden


def train():
    modle = Net(len(word2ix), 128, 256)  # vocab_size=8293, embedding_dim=128, hidden_dim=256
    criterion = nn.CrossEntropyLoss()
    if torch.cuda.is_available():
        print('Cuda is available!')
        modle = modle.cuda()
        criterion = criterion.cuda()
    optimizer = torch.optim.Adam(modle.parameters(), lr=1e-3)  # learning rate 1e-3
    loss_meter = meter.AverageValueMeter()
    period = []
    loss2 = []
    for epoch in range(8):  # at most 8 epochs
        loss_meter.reset()
        for i, data in tqdm.tqdm(enumerate(dataloader)):
            # data: torch.Size([128, 125]), dtype=torch.int32
            data = data.long().transpose(0, 1).contiguous()
            # long is the default tensor type; transposed to [125, 128]
            data = data.cuda()
            optimizer.zero_grad()
            input, target = Variable(data[:-1, :]), Variable(data[1:, :])
            output, _ = modle(input)
            loss = criterion(output, target.view(-1))
            # torch.Size([15872, 8293]), torch.Size([15872])
            loss.backward()
            optimizer.step()
            loss_meter.add(loss.item())
            period.append(i + epoch * len(dataloader))
            loss2.append(loss_meter.value()[0])
            if (1 + i) % 575 == 0:  # visualize every 575 batches
                print(str(i) + ':' + generate(modle, '床前明月光', ix2word, word2ix))
    torch.save(modle.state_dict(), '...your path/model_poet_2.pth')
    plt.plot(period, loss2)
    plt.show()


def generate(model, start_words, ix2word, word2ix):
    # Given a few starting words, generate a complete poem
    txt = []
    for word in start_words:
        txt.append(word)
    input = Variable(torch.Tensor([word2ix['<START>']]).view(1, 1).long())
    # tensor([8291.]) → tensor([[8291.]]) → tensor([[8291]])
    input = input.cuda()
    hidden = None
    num = len(txt)
    for i in range(48):  # maximum generated length
        output, hidden = model(input, hidden)
        if i < num:
            w = txt[i]
            input = Variable(input.data.new([word2ix[w]])).view(1, 1)
        else:
            top_index = output.data[0].topk(1)[1][0]
            w = ix2word[top_index.item()]
            txt.append(w)
            input = Variable(input.data.new([top_index])).view(1, 1)
        if w == '<EOP>':
            break
    return ''.join(txt)


def gen_acrostic(model, start_words, ix2word, word2ix):
    # Acrostic poem: start_words become the first character of each line
    result = []
    txt = []
    for word in start_words:
        txt.append(word)
    input = Variable(torch.Tensor([word2ix['<START>']]).view(1, 1).long())
    input = input.cuda()
    hidden = None
    num = len(txt)
    index = 0
    pre_word = '<START>'
    for i in range(48):
        output, hidden = model(input, hidden)
        top_index = output.data[0].topk(1)[1][0]
        w = ix2word[top_index.item()]
        if pre_word in {'。', '!', '<START>'}:
            if index == num:
                break
            else:
                w = txt[index]
                index += 1
                input = Variable(input.data.new([word2ix[w]])).view(1, 1)
        else:
            input = Variable(input.data.new([word2ix[w]])).view(1, 1)
        result.append(w)
        pre_word = w
    return ''.join(result)


def test():
    modle = Net(len(word2ix), 128, 256)  # vocab_size=8293, embedding_dim=128, hidden_dim=256
    if torch.cuda.is_available():
        modle.cuda()
    modle.load_state_dict(torch.load('...your path/model_poet.pth'))
    modle.eval()
    # txt = generate(modle, '床前明月光', ix2word, word2ix)
    # print(txt)
    txt = gen_acrostic(modle, '机器学习', ix2word, word2ix)
    print(txt)


if __name__ == '__main__':
    data_path = '...your path/tang.npz'
    data, word2ix, ix2word = get_data()
    data = torch.from_numpy(data)
    dataloader = torch.utils.data.DataLoader(data, batch_size=10, shuffle=True, num_workers=1)
    # train()
    test()
```
This post hand-built an LSTM network in PyTorch and implemented basic poem generation, but the results feel mediocre, perhaps because of too few training epochs. Things to revisit and think over when there is time:
Next up: GAN, Attention, BERT, and the Transformer; possible hands-on projects include a chatbot and reproducing CNN-based text classification on English text.
Reposted from: http://zwtrn.baihongyu.com/