http://math.tut.fi/~ruohonen/S_1.pdf • Test: the Difference between Two Means • https://abtestguide.com/calc/ Hypothesis testing steps: 1) Define the null hypothesis H0, e.g., the two samples belong to the same population, or there is no trend. Usually we would like to reject it. 2) Choose the test statistic for the given data, e.g., mean or trend, and a test level α, e.g., 5%. 3) Consider or create the nul..
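A minimal sketch of the steps above, using a two-sample Welch t-test for the difference between two means; the synthetic data, sample sizes, and alpha = 0.05 are illustrative assumptions, not part of the original post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) H0: the two samples come from the same population (equal means).
group_a = rng.normal(loc=10.0, scale=2.0, size=500)   # e.g., control metric
group_b = rng.normal(loc=10.3, scale=2.0, size=500)   # e.g., variant metric

# 2) Test statistic: difference of means via Welch's t-test, with test level alpha.
alpha = 0.05
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# 3) Compare the p-value against alpha to decide whether to reject H0.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```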
In a CPA payment model, the expected value of an impression: … Model Notation. The CTR model of the training set is given by: … Choose a simple prior for Wck. Then we use the Laplace approximation, which enables us to approximate the posterior with a Gaussian distribution centered at the posterior mode (the minimum of the negative log-posterior). We can also derive the following covariance: … However, since the sampling time for every variable ..
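A hedged sketch of the Laplace-approximation idea above for a logistic CTR model: find the mode of the log-posterior under a Gaussian prior on the weights w, then approximate the posterior as a Gaussian whose covariance is the inverse Hessian at the mode. The prior scale `lam`, the synthetic features, and the final Thompson-style sampling step are assumptions for illustration, not the post's exact model.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(w, X, y, lam):
    """Logistic negative log-likelihood plus Gaussian prior term (lam/2)*||w||^2."""
    z = X @ w
    # log(1 + exp(z)) - y*z is the per-example negative log-likelihood
    return np.sum(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * (w @ w)

def laplace_fit(X, y, lam=1.0):
    d = X.shape[1]
    res = minimize(neg_log_posterior, np.zeros(d), args=(X, y, lam), method="L-BFGS-B")
    w_map = res.x                                    # mode of the posterior
    p = 1.0 / (1.0 + np.exp(-(X @ w_map)))
    # Hessian of the negative log-posterior at the mode; its inverse is the
    # covariance of the Gaussian approximation.
    H = (X * (p * (1 - p))[:, None]).T @ X + lam * np.eye(d)
    return w_map, np.linalg.inv(H)

# Usage: draw one weight vector from the approximate posterior (as Thompson
# sampling would) and score a candidate impression.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(float)
w_map, cov = laplace_fit(X, y)
w_sample = rng.multivariate_normal(w_map, cov)
ctr_hat = 1.0 / (1.0 + np.exp(-(X[0] @ w_sample)))
```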
MLE vs MAP: Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) estimation are methods for estimating the parameters of a probability distribution or graphical model. - MLE: choose the parameters that maximize the likelihood function. - MAP: choose the parameters that maximize the posterior (the product of likelihood and prior). Thompson Sampling: Thompson sampling is a method that selects an arm based on the probability that the arm yields a reward. Here the probability is Bayesian: there is a prior, a likelihood, and their product, the posterior. It starts from the reward prior distribution; from each arm's reward prior distribution, a ran..
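A minimal Beta-Bernoulli sketch of the Thompson sampling idea above: keep a Beta posterior over each arm's reward probability, draw one sample from every arm's posterior, and pull the arm with the largest sample. The true arm probabilities `true_p` and the horizon are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.04, 0.05, 0.07]           # unknown reward (e.g., click) probabilities
alpha = np.ones(len(true_p))          # Beta posterior: 1 + successes per arm
beta = np.ones(len(true_p))           # Beta posterior: 1 + failures per arm

for t in range(10_000):
    samples = rng.beta(alpha, beta)   # one draw from each arm's posterior
    arm = int(np.argmax(samples))     # choose the arm whose sample is largest
    reward = rng.random() < true_p[arm]
    # Posterior update: the Beta prior is conjugate to the Bernoulli likelihood
    alpha[arm] += reward
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```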
Boosting ㅇ Boosting: try to fit the data by using multiple simpler models, so-called base learners / weak learners. ㅇ Difference between Bagging (Random Forest) and Boosting - In common: get N learners from 1 learner. - Difference: bagging trains N independent models, while boosting adds each new model to correct the previous ones (a small sketch follows below). https://quantdare.com/what-is-the-difference-between-bagging-and-boosting/ - Boosting is a kind of ensemble model - It is an adaptive basis function model that sequen..
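A small sketch of the "each new learner corrects the previous ones" idea: least-squares gradient boosting with depth-1 trees (stumps) as the weak learner. The learning rate, number of rounds, and synthetic data are assumptions for illustration, not a specific library's implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

n_rounds, lr = 100, 0.1
pred = np.full_like(y, y.mean())      # start from a constant model
stumps = []
for _ in range(n_rounds):
    residual = y - pred               # what the current ensemble still gets wrong
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    stumps.append(stump)
    pred += lr * stump.predict(X)     # add the new weak learner's correction

print("train MSE:", np.mean((y - pred) ** 2))
```

In contrast, a bagging ensemble such as Random Forest would fit each tree independently on a bootstrap sample and average them, rather than fitting each new tree to the current residuals.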