https://arxiv.org/abs/2008.07669
HiPPO: Recurrent Memory with Optimal Polynomial Projections

"A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed. We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases. Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem. As special cases, our framework yields a short derivation of the recent Legendre Memory Unit (LMU) from first principles, and generalizes the ubiquitous gating mechanism of recurrent neural networks such as GRUs. This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale. HiPPO-LegS enjoys the theoretical benefits of timescale robustness, fast updates, and bounded gradients. By incorporating the memory dynamics into recurrent neural networks, HiPPO RNNs can empirically capture complex temporal dependencies. On the benchmark permuted MNIST dataset, HiPPO-LegS sets a new state-of-the-art accuracy of 98.3%. Finally, on a novel trajectory classification task testing robustness to out-of-distribution timescales and missing data, HiPPO-LegS outperforms RNN and neural ODE baselines by 25-40% accuracy."

So writing a deep learning paper with measure theory and going from a Stanford PhD straight to a Princeton professorship is small fry, apparently. When this paper came out, the reviewers' reaction was that they'd never seen measure theory used like this. I guess everyone who couldn't manage even that much is a moron then. Stanford, morons. Princeton, morons. NeurIPS, morons. The guy wrote a paper out of compact-set material from SNU's freshman intro analysis course and "only" managed to become a professor. Hey, if talking big counted, I'd be a Nobel laureate too lololol
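(For anyone who wants to see what the abstract is actually describing: below is a minimal sketch of the discretized HiPPO-LegS recurrence, assuming the LegS transition matrices and the forward-Euler discretization given in the paper. The function names and the state size N=32 are illustrative, not from the paper.)

```python
import numpy as np

def legs_matrices(N):
    # HiPPO-LegS transition matrices as given in the paper:
    #   A[n, k] = sqrt((2n+1)(2k+1)) for n > k,  n + 1 for n == k,  0 for n < k
    #   B[n]    = sqrt(2n+1)
    n = np.arange(N)
    r = np.sqrt(2.0 * n + 1.0)
    A = np.tril(np.outer(r, r), k=-1) + np.diag(n + 1.0)
    B = r
    return A, B

def hippo_legs_online(f, N=32):
    # Forward-Euler discretization of  d/dt c(t) = -(1/t) A c(t) + (1/t) B f(t):
    #   c_k = (I - A/k) c_{k-1} + (1/k) B f_k
    # The coefficient vector c is a fixed-size summary of the entire history of f,
    # updated online one step at a time; the 1/k scaling is what makes the update
    # timescale-free (no discretization step size to tune).
    A, B = legs_matrices(N)
    I = np.eye(N)
    c = np.zeros(N)
    for k, fk in enumerate(f, start=1):
        c = (I - A / k) @ c + (B / k) * fk
    return c

# Toy usage: compress a 1000-step noisy ramp into 32 coefficients.
signal = np.linspace(0.0, 1.0, 1000) + 0.01 * np.random.randn(1000)
coeffs = hippo_legs_online(signal, N=32)
```

The naive matrix-vector step above costs O(N^2) per input; the paper's "fast updates" claim refers to exploiting the structure of A to bring this down.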
People become professors and land orals at top conferences without knowing any of that, though?
https://proceedings.neurips.cc/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Review.html Nowhere in the reviewer comments does anyone say they're "seeing this for the first time"?
That's probably an answer he got from asking an AI, so what is there to verify... it's just a hallucination.
Buddy... it's like watching a grade-schooler going "I do research on this stuff. Pretty awesome, right?"