Helper functions and demonstration vignettes of increasing depth showing how to construct the Self-Attention algorithm. Based on Vaswani et al. (2017) <doi:10.48550/arXiv.1706.03762>, Dan Jurafsky and James H. Martin (2022, ISBN:978-0131873216) "Speech and Language Processing (3rd ed.)" <https://web.stanford.edu/~jurafsky/slp3/>, and Alex Graves (2020) "Attention and Memory in Deep Learning" <https://www.youtube.com/watch?v=AIiwuClvH6k>.
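For orientation, below is a minimal base-R sketch of the scaled dot-product self-attention described in Vaswani et al. (2017), the algorithm the package's helpers and vignettes build up from scratch. The function and variable names (`self_attention`, `softmax_rows`, `Q`, `K`, `V`) are illustrative and are not necessarily the package's own API.

```r
# Minimal sketch of scaled dot-product self-attention (Vaswani et al., 2017).
# Names here are illustrative only, not the attention package's exported API.

softmax_rows <- function(x) {
  # Row-wise softmax with a max-shift for numerical stability
  shifted <- x - apply(x, 1, max)
  exp_x <- exp(shifted)
  exp_x / rowSums(exp_x)
}

self_attention <- function(Q, K, V) {
  d_k <- ncol(K)
  scores <- Q %*% t(K) / sqrt(d_k)  # scaled dot-product scores
  weights <- softmax_rows(scores)   # attention weights; each row sums to 1
  weights %*% V                     # weighted sum of the value vectors
}

# Example: 3 tokens with 4-dimensional query/key/value vectors
set.seed(1)
X <- matrix(rnorm(12), nrow = 3, ncol = 4)
self_attention(Q = X, K = X, V = X)
```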
| Version: | 0.4.0 | 
| Suggests: | covr, knitr, rmarkdown, testthat (≥ 3.0.0) | 
| Published: | 2023-11-10 | 
| DOI: | 10.32614/CRAN.package.attention | 
| Author: | Bastiaan Quast | 
| Maintainer: | Bastiaan Quast <bquast at gmail.com> | 
| License: | GPL (≥ 3) | 
| NeedsCompilation: | no | 
| Materials: | README, NEWS | 
| CRAN checks: | attention results | 
| Reference manual: | attention.html, attention.pdf | 
| Vignettes: | Complete Self-Attention from Scratch (source, R code), Simple Self-Attention from Scratch (source, R code) | 
| Package source: | attention_0.4.0.tar.gz | 
| Windows binaries: | r-devel: attention_0.4.0.zip, r-release: attention_0.4.0.zip, r-oldrel: attention_0.4.0.zip | 
| macOS binaries: | r-release (arm64): attention_0.4.0.tgz, r-oldrel (arm64): attention_0.4.0.tgz, r-release (x86_64): attention_0.4.0.tgz, r-oldrel (x86_64): attention_0.4.0.tgz | 
| Old sources: | attention archive | 
| Reverse imports: | rnn, transformer | 
Please use the canonical form https://CRAN.R-project.org/package=attention to link to this page.