Exploring Large Language Models With R

Author

Wan Nor Arifin

Published

February 6, 2025

Preface

This book is my personal attempt to share my learning journey in implementing large language models (LLMs)1 within the R programming environment. There are not many tutorials available on this topic, so I hope this book can serve as a useful resource. The content will be updated over several years as new developments occur and as I continue to learn more.

This book was written with the help of several local LLMs, among them Qwen, Llama, DeepSeek, and Phi. Thank you, buddies, for your help (although I know you are inanimate). To ensure that I acknowledge their contributions, I have purposely included their names (denoted in square brackets, “[ LLM_NAME ]”) in some chapters and sections where the content relies largely on their generated text. I used Quarto to prepare this book. If you want to learn more about Quarto books, please visit https://quarto.org/docs/books.

I would like to thank the Ollama (https://ollama.com/) and Open WebUI (https://openwebui.com/) projects for their tremendous contributions to making local LLMs easily accessible.

Thank you for joining me on this adventure as we explore integrating LLMs into R. This is an evolving book that will grow with your feedback and contributions.


  1. There are also small, tiny, and super-tiny language models. For simplicity, let’s refer to all models that rely on the Transformer architecture as LLMs.↩︎