{
    "metadata": {
        "kernelspec": {
            "name": "python",
            "display_name": "Python (Pyodide)",
            "language": "python"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "python",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text\/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.8"
        }
    },
    "nbformat_minor": 4,
    "nbformat": 4,
    "cells": [
        {
            "cell_type": "markdown",
            "source": "\nQualitative social research has overslept cognitivism.\nSo was missed, the reconstruction of latent structures of meaning\nsecured by the construction of generative rules in the sense of algorithms. \nFor valid category systems (cf. Mayring) can be algorithmic rules\nspecify a finite automaton\n(vg. Buy, Paul.: ARS, Grammar-Induction, Parser, Grammar-Transduction).\n\nNow, posthumanism, poststructuralism, and transhumanism are parasitizing Opaque AI.\nAnd if they don't parasitize them, they are mutual symbionts.\n\nKarl Popper is then replaced by Harry Potter and\nqualitative social research and large language models become too little explanatory,\nbut impressive cargo-cult of a non-explanatory and everything\nveiled postmodernism.\n\nFor the algorithmic recursive sequence analysis it was shown that\nthat for the record of a sequence of actions\nat least one grammar can be specified\n(Inductorin Scheme, Parser in Pascal, Transduktor in Lisp, see Koop, P.).\n\nARS is a qualitative procedure\nthat latent rules of recorded action sequences\ncan be refuted to reconstruct.\n\nA large language model can be reprogrammed in such a way that it\ndetermined categories of a qualitative content analysis (cf. Mayring) \ncan reconstruct.\n\nHowever, the explanatory value of such a model is negligible,\nbecause it just isn't explained.\n\nTo show this, the following\nthe post-programming of a large language model is described.\n\nIn [ ]:\nFrom the corpus of encodings of a transcribed protocol, a deep language model can be used\na simulation of a sales talk can be run. \nThe algorithm of the deep language model then stands for the generative structure.\nProvide good introductions:\n    \nSteinwender, J., Schwaiger, R.:\nNeuronale Netze programmieren mit Python\n2. Auflage 2020\nISBN 978-3-8362-7452-4\n\nTrask, A. W.:\nNeuronale Netze und Deep Learning kapieren\nDer einfache Praxiseinstieg mit Beispielen in Python\n1. Auflage 2020\nISBN 978-3-7475-0017-0\n\nHirschle, J.:\nDeep Natural Language Processing\n1. Auflage 2022\nISBN 978-3-446-47363-8\n\nThe data structures in this text are taken from the above title by A. IN. Trask reprogrammed. \nThe deep language model for sales talks is then derived from this.\n    \n    \nNeural networks are multi-dimensional, mostly two-dimensional data fields of rational numbers. \nA hidden layer of predictive weights weights the input layer data, propagates the results to the next layer, and so on,\nuntil an open output layer then outputs them.\n\nIn the training phase, the weights are backpropagated, in the case of large language models with recurrent networks, with attention to the logged context.\n\nThe illustrative examples attempt to compare a team's scores by weighting the number of toes,\nthe games won so far and the number of fans to determine the future chances of winning.\n\n\n\n\n",
            "metadata": []
        },
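        {
            "cell_type": "markdown",
            "source": "The claim that a valid category system can be written down as algorithmic rules can be illustrated with a deterministic finite automaton over category codes. The following cell is a minimal sketch: the codes are of the kind that appear in the generated sample at the end of this notebook, but the states and transition rules are placeholder assumptions for illustration, not the grammar actually reconstructed by ARS.",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "# Minimal sketch of a deterministic finite automaton over category codes.\n# The states (q0, q1, ...) and the transition rules are illustrative\n# assumptions only; they are NOT the grammar reconstructed by ARS.\n\ntransitions = {\n    (\"q0\", \"KBG\"): \"q1\",\n    (\"q1\", \"VBG\"): \"q2\",\n    (\"q2\", \"KBA\"): \"q3\",\n    (\"q3\", \"VBA\"): \"q2\",\n    (\"q2\", \"KAA\"): \"q4\",\n    (\"q4\", \"VAA\"): \"q5\",\n}\naccepting_states = {\"q5\"}\n\ndef accepts(sequence):\n    # run the automaton over one encoded action sequence\n    state = \"q0\"\n    for symbol in sequence:\n        if (state, symbol) not in transitions:\n            return False  # no rule applies: the sequence is rejected\n        state = transitions[(state, symbol)]\n    return state in accepting_states\n\nprint(accepts(\"KBG VBG KBA VBA KAA VAA\".split()))  # True under these toy rules\nprint(accepts(\"KBG KBG\".split()))                  # False under these toy rules",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },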
        {
            "cell_type": "markdown",
            "source": "Just an input date, here is the toe count:",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "# The network\nweight= 0.1\ndef neural_network (input, weight):\n    output= input* weight\n    return output\n\n# Application of the network\nnumber_of_toes= [8.5, 9.5, 10, 9]\ninput= number_of_toes[0]\noutput= neural_network (input, weight)\nprint(output)",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
        {
            "cell_type": "markdown",
            "source": "0.8500000000000001",
            "metadata": []
        },
        {
            "cell_type": "markdown",
            "source": "Now with three input data (toe count, wins so far, number of fans):",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "def propagation function(a,b):\n    assert(only== only (b))\n    output= 0\n    for i in range(len(s)):\n        output+= (a[i] * b[i])\n    return output\n\nweight= [0.1, 0.2, 0] \n    \ndef neural_network(input, weight):\n    output= propagationfunction(input,weight)\n    return output\n\n\ntoes=  [8.5, 9.5, 9.9, 9.0]\nwinrate= [0.65, 0.8, 0.8, 0.9]\nfans = [1.2, 1.3, 0.5, 1.0]\n\ninput= [toes[0],winrate[0],fans[0]]\noutput= neural_network(input,weight)\n\nprint(output)",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
        {
            "cell_type": "markdown",
            "source": "0.9800000000000001",
            "metadata": []
        },
        {
            "cell_type": "markdown",
            "source": "Now with the library numy (arrays, vectors, matrices):",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "import numpy as the\nweight= the.array([0.1, 0.2, 0])\ndef neural_network(input, weight):\n    output= input.dots(weight)\n    return output\n    \ntoes= the.array([8.5, 9.5, 9.9, 9.0])\nwinrate= the.array([0.65, 0.8, 0.8, 0.9])\nfans       = the.array([1.2, 1.3, 0.5, 1.0])\n\n\ninput= the.array([toes[0],winrate[0],fans[0]])\noutput= neural_network(input,weight)\n\nprint(output)",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
        {
            "cell_type": "markdown",
            "source": "0.9800000000000001",
            "metadata": []
        },
        {
            "cell_type": "markdown",
            "source": "The weights can be adjusted until\nuntil the error is minimized.",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "# Principle example\nweight= 0.5\ninput= 0.5\ndesired_prediction= 0.8\n\nincrement= 0.001\n\nfor iteration in range(1101):\n\n    forecast= input* weight\n    mistake= (forecast- desired_prediction)** 2\n\n    print(\"Error:\"+ str(error)+ \"Prediction:\"+ str(prediction))\n    \n    higher_prediction= input* (weight+ increment)\n    deeper_error= (desired_prediction- higher_prediction)** 2\n\n    higher_prediction= input* (weight- increment)\n    deeper_bugs= (desired_prediction- deeper_prediction)** 2\n\n    if(deeper_error<  higher_error):\n        weight= weight- increment\n        \n    if(deeper_error>  higher_error):\n        weight= weight+ increment",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
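        {
            "cell_type": "markdown",
            "source": "The hot-and-cold search above probes the error on both sides of the current weight. Backpropagation, as used later for the language model, instead follows the derivative of the error. The next cell is a minimal sketch of this gradient step for the same scalar example; the learning rate alpha is an assumption chosen only for illustration.",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "# Minimal sketch: gradient descent on the same scalar example.\nweight = 0.5\ninput = 0.5\ndesired_prediction = 0.8\nalpha = 0.1  # learning rate, chosen only for illustration\n\nfor iteration in range(20):\n    prediction = input * weight\n    error = (prediction - desired_prediction) ** 2\n\n    delta = prediction - desired_prediction  # direction and size of the miss\n    weight_delta = delta * input             # derivative of the error w.r.t. the weight\n    weight = weight - alpha * weight_delta   # step against the gradient\n\n    print(\"Error:\" + str(error) + \" Prediction:\" + str(prediction))",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },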
        {
            "cell_type": "code",
            "source": "\n# Trask, A. W.:\n# Understand neural networks and deep learning\n# The simple introduction to practice with examples in Python\n#1st Edition 2020\n# ISBN 978-3-7475-0017-0\n\nimport numpy as e.g.\n\n# Object class array\nclass Tensor (object):\n    \n    def __init__(self,data,\n                 autograd=False,\n                 creators=None,\n                 creation_op=None,\n                 id=None):\n        \n        self.data = e.g..array(data)\n        self.autograd = autograd\n        self.grad = None\n\n        if(id is None):\n            self.id = e.g..random.randint(0,1000000000)\n        else:\n            self.id = id\n        \n        self.creators = creators\n        self.creation_op = creation_op\n        self.children = {}\n        \n        if(creators is not None):\n            for c in creators:\n                if(self.id not in c.children):\n                    c.children[self.id] = 1\n                else:\n                    c.children[self.id] += 1\n\n    def all_children_grads_accounted_for(self):\n        for id,cnt in self.children.items():\n            if(cnt != 0):\n                return False\n        return True \n        \n    def backward(self,grad=None, grad_origin=None):\n        if(self.autograd):\n \n            if(grad is None):\n                grad = Tensor(np.ones_like(self.data))\n\n            if(grad_origin is not None):\n                if(self.children[grad_origin.id] == 0):\n                    return\n                    print(self.id)\n                    print(self.creation_op)\n                    print(len(self.creators))\n                    for c in self.creators:\n                        print(c.creation_op)\n                    raise Exception(\"cannot backprop more than once\")\n                else:\n                    self.children[grad_origin.id] -= 1\n\n            if(self.grad is None):\n                self.grad = grad\n            else:\n                self.grad += grad\n            \n\n            assert grad.autograd == False\n            \n\n            if(self.creators is not None and \n               (self.all_children_grads_accounted_for() or \n                grad_origin is None)):\n\n                if(self.creation_op == \"add\"):\n                    self.creators[0].backward(self.grad, self)\n                    self.creators[1].backward(self.grad, self)\n                    \n                if(self.creation_op == \"sub\"):\n                    self.creators[0].backward(Tensor(self.grad.data), self)\n                    self.creators[1].backward(Tensor(self.grad.__neg__().data), self)\n\n                if(self.creation_op == \"I have\"):\n                    new = self.grad * self.creators[1]\n                    self.creators[0].backward(new , self)\n                    new = self.grad * self.creators[0]\n                    self.creators[1].backward(new, self)                    \n                    \n                if(self.creation_op == \"mm\"):\n                    c0= self.creators[0]\n                    c1= self.creators[1]\n                    new = self.grad.mm(c1.transpose())\n                    c0.backward(new)\n                    new = self.grad.transpose().mm(c0).transpose()\n                    c1.backward(new)\n                    \n                if(self.creation_op == \"transpose\"):\n                    self.creators[0].backward(self.grad.transpose())\n\n                if(\"sum\" in self.creation_op):\n                    dim = 
int(self.creation_op.split(\"_\")[1])\n                    self.creators[0].backward(self.grad.expand(dim,\n                                                               self.creators[0].data.shape[dim]))\n\n                if(\"expand\" in self.creation_op):\n                    dim = int(self.creation_op.split(\"_\")[1])\n                    self.creators[0].backward(self.grad.am(dim))\n                    \n                if(self.creation_op == \"neg\"):\n                    self.creators[0].backward(self.grad.__neg__())\n                    \n                if(self.creation_op == \"sigmoid\"):\n                    ones = Tensor(np.ones_like(self.grad.data))\n                    self.creators[0].backward(self.grad * (self * (ones - self)))\n                \n                if(self.creation_op == \"tanh\"):\n                    ones = Tensor(np.ones_like(self.grad.data))\n                    self.creators[0].backward(self.grad * (ones - (self * self)))\n                \n                if(self.creation_op == \"index_select\"):\n                    new_grad = e.g..zeros_like(self.creators[0].data)\n                    indices_ = self.index_select_indices.data.flatten()\n                    grad_ = grad.data.reshape(len(indices_), -1)\n                    for i in range(len(indices_)):\n                        new_grad[indices_[i]] += grad_[i]\n                    self.creators[0].backward(Tensor(new_grad))\n                    \n                if(self.creation_op == \"cross_entropy\"):\n                    dx = self.softmax_output - self.target_dist\n                    self.creators[0].backward(Tensor(dx))\n                    \n    def __add__(self, other):\n        if(self.autograd and other.autograd):\n            return Tensor(self.data + other.data,\n                          autograd=True,\n                          creators=[self,other],\n                          creation_op=\"add\")\n        return Tensor(self.data + other.data)\n\n    def __neg__(self):\n        if(self.autograd):\n            return Tensor(self.data * -1,\n                          autograd=True,\n                          creators=[self],\n                          creation_op=\"neg\")\n        return Tensor(self.data * -1)\n    \n    def __sub__(self, other):\n        if(self.autograd and other.autograd):\n            return Tensor(self.data - other.data,\n                          autograd=True,\n                          creators=[self,other],\n                          creation_op=\"sub\")\n        return Tensor(self.data - other.data)\n    \n    def __mul__(self, other):\n        if(self.autograd and other.autograd):\n            return Tensor(self.data * other.data,\n                          autograd=True,\n                          creators=[self,other],\n                          creation_op=\"I have\")\n        return Tensor(self.data * other.data)    \n\n    def sum(self, dim):\n        if(self.autograd):\n            return Tensor(self.data.I am\n                          autograd=True,\n                          creators=[self],\n                          creation_op=\"sum_\"+str(dim))\n        return Tensor(self.data.am(dim))\n    \n    def expand(self, dim,copies):\n\n        trans_cmd = list(range(0,len(self.data.shape)))\n        trans_cmd.insert(dim,len(self.data.shape))\n        new_data = self.data.repeat(copies).reshape(list(self.data.shape) + [copies]).transpose(trans_cmd)\n        \n        if(self.autograd):\n            return Tensor(new_data,\n                          
autograd=True,\n                          creators=[self],\n                          creation_op=\"expand_\"+str(dim))\n        return Tensor(new_data)\n    \n    def transpose(self):\n        if(self.autograd):\n            return Tensor(self.data.transpose(),\n                          autograd=True,\n                          creators=[self],\n                          creation_op=\"transpose\")\n        \n        return Tensor(self.data.transpose())\n    \n    def mm(self, x):\n        if(self.autograd):\n            return Tensor(self.data.dot(x.data),\n                          autograd=True,\n                          creators=[self,x],\n                          creation_op=\"mm\")\n        return Tensor(self.data.dot(x.data))\n    \n    def sigmoid(self):\n        if(self.autograd):\n            return Tensor(1 \/ (1 + e.g..exp(-self.data)),\n                          autograd=True,\n                          creators=[self],\n                          creation_op=\"sigmoid\")\n        return Tensor(1 \/ (1 + e.g..exp(-self.data)))\n\n    def tanh(self):\n        if(self.autograd):\n            return Tensor(np.fishy(self.data),\n                          autograd=True,\n                          creators=[self],\n                          creation_op=\"tanh\")\n        return Tensor(np.fishy(self.data))\n    \n    def index_select(self, indices):\n\n        if(self.autograd):\n            new = Tensor(self.data[indices.data],\n                         autograd=True,\n                         creators=[self],\n                         creation_op=\"index_select\")\n            new.index_select_indices = indices\n            return new\n        return Tensor(self.data[indices.data])\n    \n    def softmax(self):\n        temp = e.g..exp(self.data)\n        softmax_output = temp \/ e.g..sum(temp,\n                                       axis=len(self.data.shape)-1,\n                                       keepdims=True)\n        return softmax_output\n    \n    def cross_entropy(self, target_indices):\n\n        temp = e.g..exp(self.data)\n        softmax_output = temp \/ e.g..sum(temp,\n                                       axis=len(self.data.shape)-1,\n                                       keepdims=True)\n        \n        t = target_indices.data.flatten()\n        p = softmax_output.reshape(len(t),-1)\n        target_dist = e.g..eye(p.shape[1])[t]\n        loss = -(e.g.log(p) * (target_dist)).sum(1).mean()\n    \n        if(self.autograd):\n            out = Tensor(loss,\n                         autograd=True,\n                         creators=[self],\n                         creation_op=\"cross_entropy\")\n            out.softmax_output = softmax_output\n            out.target_dist = target_dist\n            return out\n\n        return Tensor(loss)\n        \n    \n    def __repr__(self):\n        return str(self.data.__repr__())\n    \n    def __str__(self):\n        return str(self.data.__str__())  \n\nclass Layer(object):\n    \n    def __init__(self):\n        self.parameters = list()\n        \n    def get_parameters(self):\n        return self.parameters\n\n    \nclass SGD(object):\n    \n    def __init__(self, parameters, alpha=0.1):\n        self.parameters = parameters\n        self.alpha = alpha\n    \n    def zero(self):\n        for p in self.parameters:\n            p.grad.data *= 0\n        \n    def step(self, zero=True):\n        \n        for p in self.parameters:\n            \n            p.data -= p.grad.data * self.alpha\n            \n            
if(zero):\n                p.grad.data *= 0\n\n\nclass Linear(Layer):\n\n    def __init__(self, n_inputs, n_outputs, bias=True):\n        super().__heat__()\n        \n        self.use_bias = bias\n        \n        IN= e.g..random.randn(n_inputs, n_outputs) * e.g..sqrt(2.0\/(n_inputs))\n        self.weight = Tensor(W, autograd=True)\n        if(self.use_bias):\n            self.bias = Tensor(np.zeros(n_outputs), autograd=True)\n        \n        self.parameters.append(self.weight)\n        \n        if(self.use_bias):        \n            self.parameters.append(self.bias)\n\n    def forward(self, input):\n        if(self.use_bias):\n            return input.mm(self.weight)+self.bias.expand(0,len(input.data))\n        return input.mm(self.weight)\n\n\nclass Sequential(Layer):\n    \n    def __init__(self, layers=list()):\n        super().__heat__()\n        \n        self.layers = layers\n    \n    def add(self, layer):\n        self.layers.append(layer)\n        \n    def forward(self, input):\n        for layer in self.layers:\n            input = layer.forward(input)\n        return input\n    \n    def get_parameters(self):\n        params = list()\n        for l in self.layers:\n            params += l.get_parameters()\n        return params\n\n\nclass Embedding(Layer):\n    \n    def __init__(self, vocab_size, dim):\n        super().__heat__()\n        \n        self.vocab_size = vocab_size\n        self.dim = dim\n        \n        # this random initialiation style is just a convention from word2vec\n        self.weight = Tensor((np.random.rand(vocab_size, dim) - 0.5) \/ dim, autograd=True)\n        \n        self.parameters.append(self.weight)\n    \n    def forward(self, input):\n        return self.weight.index_select(input)\n\n\nclass Tanh(Layer):\n    def __init__(self):\n        super().__heat__()\n    \n    def forward(self, input):\n        return input.tanh()\n\n\nclass Sigmoid(Layer):\n    def __init__(self):\n        super().__heat__()\n    \n    def forward(self, input):\n        return input.sigmoid()\n    \n\nclass CrossEntropyLoss(object):\n    \n    def __init__(self):\n        super().__heat__()\n    \n    def forward(self, input, target):\n        return input.cross_entropy(target)\n\n    \n# Sprachmodell Long Short Term Memory\nclass LSTMCell(Layer):\n    \n    def __init__(self, n_inputs, n_hidden, n_output):\n        super().__heat__()\n\n        self.n_inputs = n_inputs\n        self.n_hidden = n_hidden\n        self.n_output = n_output\n\n        self.xf = Linear(n_inputs, n_hidden)\n        self.xi = Linear(n_inputs, n_hidden)\n        self.xo= Linear(n_inputs, n_hidden)        \n        self.xc = Linear(n_inputs, n_hidden)        \n        \n        self.hf = Linear(n_hidden, n_hidden, bias=False)\n        self.hi = Linear(n_hidden, n_hidden, bias=False)\n        self.to= Linear(n_hidden, n_hidden, bias=False)\n        self.hc = Linear(n_hidden, n_hidden, bias=False)        \n        \n        self.w_ho= Linear(n_hidden, n_output, bias=False)\n        \n        self.parameters += self.xf.get_parameters()\n        self.parameters += self.xi.get_parameters()\n        self.parameters += self.xo.get_parameters()\n        self.parameters += self.xc.get_parameters()\n\n        self.parameters += self.hf.get_parameters()\n        self.parameters += self.hi.get_parameters()        \n        self.parameters += self.to.get_parameters()        \n        self.parameters += self.hc.get_parameters()                \n        \n        self.parameters += 
self.w_ho.get_parameters()        \n    \n    def forward(self, input, hidden):\n        \n        prev_hidden = hidden[0]        \n        prev_cell = hidden[1]\n        \n        f = (self.xf.forward(input) + self.hf.forward(prev_hidden)).sigmoid()\n        i = (self.xi.forward(input) + self.hi.forward(prev_hidden)).sigmoid()\n        O= (self.xo.forward(input) + self.to.forward(prev_hidden)).sigmoid()        \n        g = (self.xc.forward(input) + self.hc.forward(prev_hidden)).tanh()\n        c = (f * prev_cell) + (i * g)\n\n        h = O* c.tanh()\n        \n        output = self.w_ho.forward(h)\n        return output, (h, c)\n    \n    def init_hidden(self, batch_size=1):\n        init_hidden = Tensor(np.zeros((batch_size,self.n_hidden)), autograd=True)\n        init_cell = Tensor(np.zeros((batch_size,self.n_hidden)), autograd=True)\n        init_hidden.data[:,0] += 1\n        init_cell.data[:,0] += 1\n        return (init_hidden, init_cell)\n\nimport sys,random,math\nfrom collections import Counter\nimport numpy as e.g.\nimport sys\n\ne.g..random.seed(0)\n\n# Import the VKG BODY\nf = open('VKGKORPUS.TXT','r')\nraw = f.read()\nf.close()\n\n\n\nvocab = list(set(raw))\nword2index = {}\nfor i,word in enumerate(vocab):\n    word2index[word]=i\nindices = e.g..array(list(map(lambda x:word2index[x], raw)))\n\nembed = Embedding(vocab_size=only (vocab), dim=512)\nmodel = LSTMCell(n_inputs=512, n_hidden=512, n_output=just(vocab))\nmodel.w_ho.weight.data *= 0\n\ncriterion = CrossEntropyLoss()\noptimum= SGD(parameters=model.get_parameters() + embed.get_parameters(), alpha=0.05)\n\ndef generate_sample(n=30, init_char=' '):\n    s = \"\"\n    hidden = model.init_hidden(batch_size=1)\n    input = Tensor(np.array([word2index[init_char]]))\n    for i in range(n):\n        rnn_input = embed.forward(input)\n        output, hidden = model.forward(input=rnn_input, hidden=hidden)\n#         output.data *= 25\n#         temp_dist = output.softmax()\n#         temp_dist \/= temp_dist.sum()\n\n#         m = (temp_dist > np.random.rand()).argmax()\n        m = output.data.argmax()\n        c = vocab[m]\n        input = Tensor(np.array([m]))\n        s += c\n    return s\n\nbatch_size = 16\nbptt= 25\nn_batches = int((index.shape[0] \/ (batch_size)))\n\ntrimmed_indices = indices[:n_batches*batch_size]\nbatched_indices = trimmed_indices.reshape(batch_size, n_batches).transpose()\n\ninput_batched_indices = batched_indices[0:-1]\ntarget_batched_indices = batched_indices[1:]\n\nn_bptt= int(((n_batches-1) \/ bptt))\ninput_batches = input_batched_indices[:n_bptt*bptt].reshape(n_bptt,bptt,batch_size)\ntarget_batches = target_batched_indices[:n_bptt*bptt].reshape(n_bptt, bptt, batch_size)\nmin_loss = 1000\n\n# Training of the neural network\ndef train(iterations=400):\n    for iter in range(iterations):\n        total_loss = 0\n        n_loss = 0\n\n        hidden = model.init_hidden(batch_size=batch_size)\n        batches_to_train = len(input_batches)\n    #     batches_to_train = 32\n        for batch_i in range(batches_to_train):\n\n            hidden = (Tensor(hidden[0].data, autograd=True), Tensor(hidden[1].data, autograd=True))\n\n            losses = list()\n            for t in range(bptt):\n                input = Tensor(input_batches[batch_i][t], autograd=True)\n                rnn_input = embed.forward(input=input)\n                output, hidden = model.forward(input=rnn_input, hidden=hidden)\n\n                target = Tensor(target_batches[batch_i][t], autograd=True)    \n                batch_loss = 
criterion.forward(output, target)\n\n                if(t == 0):\n                    losses.append(batch_loss)\n                else:\n                    losses.append(batch_loss + losses[-1])\n\n            loss = losses[-1]\n\n            loss.backward()\n            optimum.step()\n            total_loss += loss.data \/ bptt\n\n            epoch_loss = e.g..exp(total_loss \/ (batch_i+1))\n            min_loss =1000\n            if(epoch_loss < min_loss):\n                min_loss = epoch_loss\n                print()\n\n            log = \"\\r Iter:\" + str(iter)\n            log += \" - Alpha:\" + str (opt.alpha)[0:5]\n            log += \" - Batch \"+str(batch_i+1)+\"\/\"+str(len(input_batches))\n            log += \" - Min Loss:\" + str(min_loss)[0:5]\n            log += \" - Loss:\" + str(epoch_loss)\n            if(batch_i == 0):\n                log += \" - \" + generate_sample(n=70, init_char='T').replace(\"\\n\",\" \")\n            if(batch_i % 1 == 0):\n                sys.stdout.write(log)\n\n        optimum.alpha *= 0.99\n\n\n\n\ntrain(100)\n\ndef generate_sample(n=30, init_char=' '):\n    s = \"\"\n    hidden = model.init_hidden(batch_size=1)\n    input = Tensor(np.array([word2index[init_char]]))\n    for i in range(n):\n        rnn_input = embed.forward(input)\n        output, hidden = model.forward(input=rnn_input, hidden=hidden)\n        output.data *= 15\n        temp_dist = output.softmax()\n        temp_dist \/= temp_dist.sum()\n\n#         m = (temp_dist > np.random.rand()).argmax() # sample from predictions\n        m = output.data.argmax() # take the max prediction\n        c = vocab[m]\n        input = Tensor(np.array([m]))\n        s += c\n    return s\nprint(generate_sample(n=500, init_char='\\n'))",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
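        {
            "cell_type": "markdown",
            "source": "The training cell above builds a character-level vocabulary from VKGKORPUS.TXT. The next cell shows the same encoding on a small made-up excerpt; the two lines in raw are only an assumption about the file format, modelled on the generated sample shown further down.",
            "metadata": []
        },
        {
            "cell_type": "code",
            "source": "# Minimal sketch of the character-level encoding used above.\n# 'raw' is a made-up two-line excerpt; the real corpus is read from VKGKORPUS.TXT.\nimport numpy as np\n\nraw = \"KBG VBG\\nKBA VBA KAA VAA\\n\"\n\nvocab = list(set(raw))                                  # one entry per character\nword2index = {word: i for i, word in enumerate(vocab)}  # character -> integer id\nindices = np.array([word2index[c] for c in raw])        # the corpus as a sequence of ids\n\nprint(len(vocab), \"distinct characters\")\nprint(indices[:10])",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },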
        {
            "cell_type": "code",
            "source": "print(generate_sample(n=500, init_char='\\n'))\n\n",
            "metadata": [],
            "outputs": [],
            "execution_count": null
        },
        {
            "cell_type": "markdown",
            "source": "Output of a generated example:\n",
            "metadata": []
        },
        {
            "cell_type": "markdown",
            "source": "KBG VBG\nKBBD VBBD KBA VBA KAE VAE KAA VAA\nKBBD VBBD KBA VBA KBBD VBBD KBA VBA KBBD VBBD KBA VBA KAE VAE \nKBG VBG\nKBBD VBBD KBA VBA KAE VAE KAE VAE KAE VAE KAE VAE KAA VAA\nKBBD VBBD KBA VBA KAE VAE KAE VAE KAA VAA\nKBBD VBBD KBA VBA KAE VAE KAA VAA\nKBBD VBBD KBA VBA KBBD VBBD KBA VBA KAE VAE KAA VAA\nKBG VBG\nKBBD KBA VBA KBBD VBBD KBA VBA KAE VAE KAE VAE KAA VAA\nKBBD VBBD KBA VBA KBBD KBA VBA KBBD VBBD KBA VBA KBBD VBBD KBA \nKAE VAE KAA VAA\nKBBD VBBD KBA VBA KAE VAE KAE VAE VAE KAA VAA\n\n",
            "metadata": []
        },
        {
            "cell_type": "markdown",
            "source": "Contrary to cognitivist models\n(ARS, Koop, P. Grammar Induction, Parser, Grammar Transduction)\nsuch a large language model explains nothing and therefore becomes\nLarge language model of postmodernism, posthumanism and transhumanism\ncelebrated with parasitic intent.\n\nIf you want to write a textbook on the rules of sales pitches,\nbut gets a software agent who likes to make sales calls,\nyou have done a bad job at a very high level.\n\n",
            "metadata": []
        }
    ]
}