A polynomial autoencoder beats PCA on transformer embeddings
Polynomial autoencoders outperform PCA for dimensionality reduction on transformer embeddings, offering a simple non-linear alternative for high-dimensional token representations.
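The linked post's exact architecture isn't reproduced here; as a minimal sketch of the underlying idea (all names and the synthetic data-generating process below are illustrative assumptions, not the author's code), the snippet compares rank-1 PCA against the same 1-D code paired with a cubic polynomial decoder. On data that lies on a curved 1-D manifold, the polynomial decoder can undo curvature that a purely linear reconstruction cannot:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings": points on a 1-D cubic curve mapped into 10-D space.
N, D = 500, 10
t = rng.uniform(-1.0, 1.0, size=N)
F = np.stack([t, t**2, t**3], axis=1)        # true curve coordinates (N, 3)
W = rng.normal(size=(3, D))                  # random linear embedding into R^D
X = F @ W + 0.01 * rng.normal(size=(N, D))   # observed data (N, 10)

# Baseline: rank-1 PCA reconstruction.
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                               # 1-D linear code
X_pca = mu + np.outer(z, Vt[0])
pca_err = np.mean((X - X_pca) ** 2)

# Polynomial decoder: same 1-D code, but the decoder is a least-squares
# map from the cubic feature expansion [1, z, z^2, z^3] back to X.
Phi = np.stack([np.ones_like(z), z, z**2, z**3], axis=1)
B, *_ = np.linalg.lstsq(Phi, X, rcond=None)
X_poly = Phi @ B
poly_err = np.mean((X - X_poly) ** 2)

print(f"PCA rank-1 reconstruction MSE:      {pca_err:.5f}")
print(f"Polynomial-decoder reconstruction MSE: {poly_err:.5f}")
```

Because the polynomial decoder's design matrix contains the linear term, its least-squares fit is never worse than the PCA reconstruction on the training data, and on curved data it is typically much better.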
HN · 101 points · 30 comments
Read at source: https://ivanpleshkov.dev/blog/polynomial-autoencoder/