
Data Science Study / Paper Reviews

[Forecasting at Scale] Paper review of Prophet, Facebook's time-series package
https://peerj.com/preprints/3190.pdf
An article published in September 2017; "at Scale" can be read as "at large scale" or "across the whole organization." Reading the paper, Prophet aims to be a package with which even someone who cannot do time-series analysis can carry out every step of the forecasting process, an A-to-Z of time-series analysis. Introduction: Most companies run forecasts for many reasons, such as capacity planning (how many people and how much raw material are needed for next month's production). Fully automated forecasting modules, however, are hard to tune and are mostly inflexible, so new assumptions (as…
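To make the "A-to-Z for non-experts" claim concrete, here is a minimal sketch of the open-source Prophet API on a made-up toy series; the `ds`/`y` column names are the package's documented convention, but the data and forecast horizon are illustrative assumptions, not from the paper.

```python
# Minimal Prophet forecasting sketch (assumes the open-source `prophet`
# package, formerly `fbprophet`).
import pandas as pd
from prophet import Prophet

# Toy daily series: Prophet expects columns named `ds` (date) and `y` (value).
df = pd.DataFrame({
    "ds": pd.date_range("2017-01-01", periods=365, freq="D"),
    "y": range(365),
})

m = Prophet()          # defaults handle trend plus weekly/yearly seasonality
m.fit(df)

future = m.make_future_dataframe(periods=30)  # extend 30 days past the data
forecast = m.predict(future)                  # yhat with uncertainty interval
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```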
[Pre-training of Deep Bidirectional Transformers for Language Understanding] BERT paper review
https://arxiv.org/abs/1810.04805
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to…
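As a quick, hedged illustration of what a pre-trained bidirectional encoder buys you (not part of the review itself), the Hugging Face `transformers` fill-mask pipeline with the public `bert-base-uncased` checkpoint predicts a masked token using both left and right context:

```python
# Masked-token prediction with a pre-trained BERT via Hugging Face `transformers`.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT reads the whole sentence bidirectionally to score the [MASK] slot.
for candidate in unmasker("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```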
[Neural Machine Translation by Jointly Learning to Align and Translate] Attention paper review
https://arxiv.org/abs/1409.0473
Neural Machine Translation by Jointly Learning to Align and Translate. Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network tha…
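The mechanism the title refers to is the additive alignment score e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j), softmaxed into weights that mix the encoder annotations into a context vector. A minimal NumPy sketch with toy, randomly initialized weights (the dimensions are arbitrary assumptions):

```python
# Additive (Bahdanau-style) attention: score each encoder annotation against
# the previous decoder state, softmax the scores, take a weighted context sum.
import numpy as np

rng = np.random.default_rng(0)
T, enc_dim, dec_dim, attn_dim = 6, 8, 8, 10   # toy sizes (assumptions)

h = rng.normal(size=(T, enc_dim))     # encoder annotations h_1..h_T
s_prev = rng.normal(size=dec_dim)     # previous decoder hidden state s_{i-1}

W_a = rng.normal(size=(attn_dim, dec_dim))
U_a = rng.normal(size=(attn_dim, enc_dim))
v_a = rng.normal(size=attn_dim)

# e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j)
e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h_j) for h_j in h])
alpha = np.exp(e - e.max()); alpha /= alpha.sum()   # softmax over source words
context = alpha @ h                                  # context vector c_i
print(alpha.round(3), context.shape)
```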
[Word2Vec] CBOW, Skip-gram paper review
Efficient Estimation of Word Representations in Vector Space (https://arxiv.org/abs/1301.3781)
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best per… Introduction…
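To make the CBOW/Skip-gram distinction concrete, a hedged sketch with gensim's `Word2Vec` (the toy corpus and hyperparameters are invented; in gensim's API, `sg=0` selects CBOW and `sg=1` Skip-gram):

```python
# Training both Word2Vec architectures on a toy corpus with gensim.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "dog", "chases", "a", "cat"],
]

# sg=0 -> CBOW (predict center word from context); sg=1 -> Skip-gram (reverse).
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
skip = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(cbow.wv.most_similar("king", topn=3))
print(skip.wv["queen"][:5])  # first few dimensions of the learned vector
```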
[Transformer: Attention Is All You Need] paper review
Paper: Attention Is All You Need (https://arxiv.org/abs/1706.03762)
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new…
Abstract: The *sequence transduction models dominant at the time of publication were complex recurrent (RNN) or conv…
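The building block the abstract alludes to is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / √d_k) V. A self-contained NumPy sketch with toy shapes (the sizes are illustrative assumptions):

```python
# Scaled dot-product attention, the core operation of the Transformer.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                  # weighted sum of value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 16)) for _ in range(3))  # toy seq_len=4, d_k=16
print(attention(Q, K, V).shape)  # (4, 16)
```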
[Generative Adversarial Nets] paper review
Paper: Generative Adversarial Networks (https://arxiv.org/abs/1406.2661)
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that…
Abstract: The paper proposes a new way of estimating generative models through an adversarial process. The generative mo…
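A minimal PyTorch sketch of the two-player training loop on a made-up 1-D Gaussian "dataset"; the architectures, optimizer settings, and data are arbitrary illustrations of the adversarial setup, not the paper's experiments:

```python
# Minimal GAN sketch: G maps noise to samples, D scores real vs. fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "data" distribution: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # D tries to label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward(); opt_d.step()

    # G tries to make D call its samples real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward(); opt_g.step()
```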
[Fully Convolutional Networks for Semantic Segmentation] paper review
Paper: Fully Convolutional Networks for Semantic Segmentation (https://arxiv.org/abs/1411.4038)
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build…
Semantic Segme…
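The paper's key move, classifying with 1x1 convolutions and upsampling back to input resolution so every pixel gets a prediction, can be sketched in a few lines of PyTorch; the tiny backbone here is a stand-in assumption, not the VGG-based network the paper adapts:

```python
# Core FCN idea: a 1x1-conv classifier head plus upsampling gives a dense,
# per-pixel class score map for inputs of any size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(          # toy stand-in for VGG features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)  # 1x1 conv

    def forward(self, x):
        h = self.classifier(self.backbone(x))   # coarse class score map
        # Bilinear upsampling back to pixel resolution (the paper learns this
        # with transposed convolutions; interpolation keeps the sketch short).
        return F.interpolate(h, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

x = torch.randn(1, 3, 64, 64)
print(TinyFCN()(x).shape)  # (1, 21, 64, 64): one score per class per pixel
```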
[Faster R-CNN] paper review
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
Reference: https://arxiv.org/abs/1506.01497
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection…
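For a sense of the end result, a hedged inference sketch with torchvision's pre-trained Faster R-CNN (the `weights="DEFAULT"` argument follows torchvision ≥ 0.13, and the random tensor stands in for a real RGB image scaled to [0, 1]):

```python
# Running a pre-trained Faster R-CNN (RPN + detection head) from torchvision.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)      # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    out = model([image])[0]          # dict of boxes, labels, scores
print(out["boxes"].shape, out["scores"][:5])
```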