These projects were inspired by Andrej Karpathy’s blog post, in which he used character-RNNs to produce text in the style of Shakespeare, Wikipedia articles, and more.

This means that an RNN can ‘remember’ and create language structures: it can form sentences by remembering sequences of words, and form words by remembering sequences of characters. The latter approach is called a character-RNN, as it uses characters as inputs to produce words.
RNNs are an interesting topic in machine learning and natural language processing because of their somewhat magical ability to learn language structure purely from sequences and predictions, without any explicit definition of grammar.
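To make that concrete, here is a minimal sketch of how a corpus is turned into character-level training examples, each pairing a window of characters with the character that follows it. This is plain NumPy; the file name and sequence length are illustrative placeholders, not the settings used in these projects.

```python
import numpy as np

text = open("input.txt").read()   # placeholder path to a training corpus
chars = sorted(set(text))         # vocabulary of unique characters
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for c, i in char_to_ix.items()}

seq_len = 40                      # how many characters the model sees at once
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_ix[c] for c in text[i:i + seq_len]])  # input window
    y.append(char_to_ix[text[i + seq_len]])                 # next character
X, y = np.array(X), np.array(y)   # training pairs: window -> next char
```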
LSTMs make up for the shortcomings of traditional RNNs: their gated architecture lets them retain information over longer spans, allowing the model to connect to inputs seen much earlier in the sequence.
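A character-level LSTM along these lines can be sketched in Keras as a toy illustration; the layer sizes here are placeholders, not the exact architectures behind the projects below.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = len(chars)  # from the data-preparation sketch above

model = Sequential([
    Embedding(vocab_size, 64),                # map character ids to vectors
    LSTM(128),                                # gated memory over the window
    Dense(vocab_size, activation="softmax"),  # distribution over the next char
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# model.fit(X, y, batch_size=128, epochs=20)  # X, y from the sketch above
```

I trained models of this kind on the following datasets: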
Donald Trump Tweets
Quotes
Shakespeare's Works
Harry Potter Novels
NumPy Library
You can view the complete project here.

Tweets produced by the model. Interestingly, it tried to generate its own links and hashtags.
You can view the complete project here.

Some interesting quotes generated by different architectures/hyper-parameters. The underlined phrases are the seeds used, and the words that follow are produced by the model.

You can view the complete project here.

The above passages do not make much sense grammatically or semantically. While they do follow the format of a dialog, their contents are not indicative of a conversation the characters are having with one another; rather, each piece of dialog seems to belong to a separate conversation. (This may actually pass as artistic, though!) Still, the output from this particular model was interesting to me because its writing style resembles the iambic pentameter Shakespeare used.

While there are still some spelling errors and some random words, this model has learned the dialog format of Shakespeare’s plays quite well. There are indications of overfitting: the model has memorized the names of the characters (Duchess of York, Romeo, King Lewis, etc.) and produced stock phrases such as “I do beseech you, sir”. However, these could simply be artifacts of the sampling temperature rather than overfitting. Overall, there is a definite improvement in the language and format of the dialogues compared to model 1.
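For context, the sampling temperature mentioned above rescales the model’s predicted distribution before the next character is drawn: low temperatures concentrate probability on the likeliest characters (output that can look memorized), while high temperatures flatten the distribution and add variety. A minimal sketch of the idea, not the projects’ exact code:

```python
import numpy as np

def sample(probs, temperature=1.0):
    """Draw a character id from a softmax output rescaled by temperature."""
    logits = np.log(probs + 1e-8) / temperature   # rescale log-probabilities
    scaled = np.exp(logits)
    scaled /= scaled.sum()                        # renormalize to a distribution
    return np.random.choice(len(scaled), p=scaled)
```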
You can view the complete project here.

However, since the length of the sentences and the format of the text differed from those of Shakespeare’s works, I increased the sequence length to 100. I also used a different prime, ‘Harry’.
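To make ‘prime’ concrete: generation starts from a seed string and repeatedly feeds the most recent characters back into the model. A sketch of that loop, assuming the model, the sample() helper, and the char_to_ix/ix_to_char mappings from the earlier sketches (the defaults here are placeholders):

```python
def generate(model, prime="Harry", length=500, seq_len=100, temperature=0.8):
    generated = prime
    for _ in range(length):
        window = generated[-seq_len:]                     # last seq_len chars
        x = np.array([[char_to_ix[c] for c in window]])   # batch of one window
        probs = model.predict(x, verbose=0)[0]            # next-char distribution
        generated += ix_to_char[sample(probs, temperature)]
    return generated
```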

You can view the complete project here.

This shows the model’s attempt at generating a function.
