Predictive Text Using LSTM
Abstract
Next word prediction is a natural language generation task: given a context, predict the most likely following word. It is a representative application of machine learning, and prior work has explored a range of approaches, including Recurrent Neural Networks and Federated Text Models. In this study, we apply a Long Short-Term Memory (LSTM) model, trained for 70 epochs. The training data were collected by web scraping and consist of a literary work by Franz Kafka. The implementation uses TensorFlow, Keras, NumPy, and Matplotlib; the trained model was exported to JSON format via TensorFlow.js. All code was developed in Google Colab.
Keywords – Machine Learning, Next Word Prediction, Natural Language Generation, LSTM.
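The abstract describes a Keras LSTM trained to predict the next word. As a minimal sketch of that pipeline, the following builds a tiny next-word model: the corpus excerpt, the context window of 3 words, and all layer sizes are illustrative assumptions, not values from the paper (which trains for 70 epochs on a scraped Kafka text).

```python
import numpy as np
import tensorflow as tf

# Toy corpus standing in for the scraped Kafka text (illustrative only).
corpus = ("one morning gregor samsa woke from troubled dreams "
          "he found himself transformed in his bed").split()

# Integer-encode the vocabulary; id 0 is reserved for padding.
vocab = sorted(set(corpus))
word_to_id = {w: i + 1 for i, w in enumerate(vocab)}
id_to_word = {i: w for w, i in word_to_id.items()}
vocab_size = len(vocab) + 1

# Build (context window -> next word) training pairs.
ids = [word_to_id[w] for w in corpus]
window = 3  # assumed context length, not specified in the paper
X = np.array([ids[i:i + window] for i in range(len(ids) - window)])
y = np.array([ids[i + window] for i in range(len(ids) - window)])

# Embedding -> LSTM -> softmax over the vocabulary.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=2, verbose=0)  # the paper reports 70 epochs

# Predict the most probable next word for the first context window.
probs = model.predict(X[:1], verbose=0)
next_word = id_to_word[int(np.argmax(probs))]
```

A model defined this way can also be exported for TensorFlow.js, as the abstract notes, via the `tensorflowjs` converter tooling.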