arXiv:2310.10688v4 Announce Type: replace
Abstract: Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
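The abstract's central architectural idea, a decoder-only attention model over patched time-series inputs, can be illustrated with a minimal sketch. What follows is a hedged toy in PyTorch, not the paper's actual TimesFM implementation: the class name `PatchedDecoder` and every hyperparameter (`patch_len`, `horizon_len`, `d_model`, and so on) are illustrative assumptions.

```python
# Illustrative sketch only: a toy patched decoder-style forecaster.
# Names and sizes are assumptions, not the paper's released code.
import torch
import torch.nn as nn

class PatchedDecoder(nn.Module):
    def __init__(self, patch_len=32, horizon_len=128, d_model=256,
                 n_heads=4, n_layers=4, max_patches=64):
        super().__init__()
        self.patch_len = patch_len
        # Embed each patch of raw values into one model-dimension token.
        self.embed = nn.Sequential(
            nn.Linear(patch_len, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        # Learned positional embeddings for the patch tokens.
        self.pos = nn.Parameter(0.02 * torch.randn(1, max_patches, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        # A causal mask turns this self-attention stack into a decoder.
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        # Each token predicts a (longer) patch of future values, so the
        # output horizon need not match the input patch length.
        self.head = nn.Linear(d_model, horizon_len)

    def forward(self, series):
        # series: (batch, context_len), context_len divisible by patch_len
        b, t = series.shape
        n_patches = t // self.patch_len
        patches = series.view(b, n_patches, self.patch_len)
        tokens = self.embed(patches) + self.pos[:, :n_patches]
        mask = nn.Transformer.generate_square_subsequent_mask(n_patches)
        hidden = self.decoder(tokens, mask=mask)
        return self.head(hidden)  # (batch, n_patches, horizon_len)

model = PatchedDecoder()
history = torch.randn(8, 512)     # 8 series with 512 past observations
forecast = model(history)[:, -1]  # last token forecasts the next 128 steps
print(forecast.shape)             # torch.Size([8, 128])
```

In a setup like this, different history lengths simply become different numbers of patch tokens, and horizons beyond `horizon_len` can be reached by feeding forecasts back in autoregressively, which is consistent with the abstract's claim of handling varied history lengths, prediction lengths, and granularities.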