  1. Sparse Signals in the Cross-Section of Returns Alex Chinco, Adam D. Clark-Joseph, and Mao Ye University of Illinois at Urbana-Champaign January 5th, 2016

  2. Canonical Problem: Find an x that forecasts returns.

  6. Step 1: Use intuition to identify x. Step 2: Use statistics to estimate x's quality: $r_{n,t} = \hat{\theta}_0 + \hat{\theta}_1 \cdot x_{t-1} + \epsilon_{n,t}$. x is a good predictor if $|\hat{\theta}_1|$ or $R^2$ is big.
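
For concreteness, here is a minimal sketch of the Step 2 estimation, using a simulated candidate signal; the signal x, its loading, and the noise scale are made-up placeholders rather than anything from the paper.

```python
# A minimal sketch of the canonical Step 2: regress next-period returns on a
# hand-picked signal and judge it by |t(theta_1)| and R^2. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                 # candidate signal x_{t-1} (placeholder)
r = 0.05 * x + rng.standard_normal(1000)      # next-period returns r_{n,t} (placeholder)

fit = sm.OLS(r, sm.add_constant(x)).fit()     # r_{n,t} = theta_0 + theta_1 * x_{t-1}
print(fit.tvalues[1], fit.rsquared)           # judge x by |t(theta_1)| and R^2
```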

  7. Lagged returns of Family Dollar were a significant predictor for more than 20% of all NYSE-listed oil and gas stocks during a 20-minute stretch on October 6th, 2010.

  8. Lagged returns of Family Dollar were a significant predictor for more than 20% of all NYSE-listed oil and gas stocks during a 20-minute stretch on October 6th, 2010. Goal: Use statistics (the LASSO) to both identify and estimate x.

  9. Slogan: LASSO to identify and estimate x. 1. Out-of-sample predictability 2. Trading-strategy returns 3. Evidence of sparsity 4. More than just news 5. Economic implications

  10. how does it work?

  11. Want: A tool to identify and estimate the largest coefficients. LASSO is OLS holding hands with a penalty function: $\min_{\vartheta} \left\{ \frac{1}{2 \cdot T} \sum_{t=1}^{T} \left( r_{n,t} - \vartheta_0 - \sum_{n'=1}^{N} \vartheta_{n'} \cdot r_{n',t-1} \right)^2 + \lambda \cdot \sum_{n'=1}^{N} |\vartheta_{n'}| \right\}$. To identify means to ignore small OLS coefficients: $\hat{\vartheta}_{n'} = \mathrm{sgn}[\hat{\theta}_{n'}] \cdot \left( |\hat{\theta}_{n'}| - \lambda \right)_+$, where the $\hat{\theta}_{n'}$ are the OLS coefficients.
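
As a rough illustration, the sketch below fits a LASSO on simulated lagged returns with scikit-learn, whose penalized objective uses the same 1/(2T) scaling as above, and spells out the soft-thresholding rule; the dimensions, the penalty level, and the three "true" predictors are assumptions for the example, not the paper's settings.

```python
# A rough illustration of LASSO-based selection on simulated lagged returns.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, N = 390, 100                              # minutes of data, candidate predictors
R_lag = rng.standard_normal((T, N))          # lagged returns r_{n',t-1} (placeholder)
theta = np.zeros(N)
theta[:3] = 0.02                             # sparse truth: only 3 predictors matter
r = R_lag @ theta + 0.01 * rng.standard_normal(T)

# scikit-learn's Lasso minimizes 1/(2T) * ||r - R_lag @ v||^2 + lam * ||v||_1,
# the same objective written above, so most coefficients are shrunk exactly to 0.
lam = 0.002
lasso = Lasso(alpha=lam).fit(R_lag, r)
print(np.flatnonzero(lasso.coef_))           # indices of the selected predictors

# With orthonormal predictors, each LASSO coefficient is the soft-thresholded
# OLS coefficient: sgn(theta_hat) * max(|theta_hat| - lam, 0).
def soft_threshold(theta_hat, lam):
    return np.sign(theta_hat) * np.maximum(np.abs(theta_hat) - lam, 0.0)
```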

  12. out-of-sample predictability

  13. Benchmark: Fit an AR model using OLS in 30-min windows. Make an out-of-sample forecast in the 31st min, $f^{\mathrm{OLS}}_{n,t}$. Run 1 regression per (stock, month) to assess out-of-sample fit: $r_{n,t+1} = \tilde{a}_n + \tilde{b}_n \cdot \left( \frac{f^{\mathrm{OLS}}_{n,t} - \mu^{\mathrm{OLS}}_{n}}{\sigma^{\mathrm{OLS}}_{n}} \right) + e_{n,t+1}$

  14. Benchmark: Fit an AR model using OLS in 30-min windows. Make an out-of-sample forecast in the 31st min, $f^{\mathrm{OLS}}_{n,t}$. Run 1 regression per (stock, month) to assess out-of-sample fit: $r_{n,t+1} = \tilde{a}_n + \tilde{b}_n \cdot \left( \frac{f^{\mathrm{OLS}}_{n,t} - \mu^{\mathrm{OLS}}_{n}}{\sigma^{\mathrm{OLS}}_{n}} \right) + e_{n,t+1}$

      Out-of-Sample Return Predictability
                                 (1)              (2)              (3)
      Const   ⟨a_n⟩         0.01 × 10^-4     0.01 × 10^-4     0.01 × 10^-4
                               (19.42)          (19.42)          (19.42)
      OLS     ⟨b_n⟩         3.57 × 10^-4                      3.00 × 10^-4
                              (140.59)                          (136.06)
      LASSO   ⟨c_n⟩                          3.17 × 10^-4     2.40 × 10^-4
                                               (166.77)         (175.02)
      ⟨Adj. R^2⟩                5.43%            4.56%            8.08%
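
A minimal sketch of this benchmark procedure, assuming one-minute returns for a single stock-month sit in a NumPy vector; the AR(1) specification, the within-month standardization, and the simulated data are simplifying stand-ins, not the paper's exact implementation.

```python
# Benchmark sketch: fit an AR(1) by OLS on each 30-minute window, forecast the
# 31st minute, then regress realized returns on the standardized forecasts.
import numpy as np
import statsmodels.api as sm

def rolling_ols_forecasts(r_n, window=30):
    forecasts, realized = [], []
    for t in range(window, len(r_n) - 1):
        y = r_n[t - window + 1 : t + 1]               # returns inside the window
        X = sm.add_constant(r_n[t - window : t])      # own one-minute lag
        fit = sm.OLS(y, X).fit()
        forecasts.append(fit.params[0] + fit.params[1] * r_n[t])   # f^OLS_{n,t}
        realized.append(r_n[t + 1])                                # r_{n,t+1}
    return np.array(forecasts), np.array(realized)

r_n = np.random.default_rng(1).normal(scale=1e-3, size=390 * 21)   # placeholder month
f, r_next = rolling_ols_forecasts(r_n)
z = (f - f.mean()) / f.std()                          # (f^OLS - mu_n) / sigma_n
ofit = sm.OLS(r_next, sm.add_constant(z)).fit()       # gives a_n-tilde and b_n-tilde
print(ofit.params, ofit.rsquared_adj)
```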

  15. The LASSO: Fit the LASSO using the same 30-min windows. Make an out-of-sample forecast in the 31st min, $f^{\mathrm{LASSO}}_{n,t}$. Run 1 regression per (stock, month) to assess out-of-sample fit: $r_{n,t+1} = \tilde{a}_n + \tilde{c}_n \cdot \left( \frac{f^{\mathrm{LASSO}}_{n,t} - \mu^{\mathrm{LASSO}}_{n}}{\sigma^{\mathrm{LASSO}}_{n}} \right) + e_{n,t+1}$

  16. The LASSO: Fit the LASSO using the same 30-min windows. Make an out-of-sample forecast in the 31st min, $f^{\mathrm{LASSO}}_{n,t}$. Run 1 regression per (stock, month) to assess out-of-sample fit: $r_{n,t+1} = \tilde{a}_n + \tilde{c}_n \cdot \left( \frac{f^{\mathrm{LASSO}}_{n,t} - \mu^{\mathrm{LASSO}}_{n}}{\sigma^{\mathrm{LASSO}}_{n}} \right) + e_{n,t+1}$ [Out-of-Sample Return Predictability table repeated from slide 14.]
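
As with the benchmark, here is a minimal sketch of the LASSO forecasting step, assuming R is a (minutes × stocks) array of one-minute returns; the penalty level lam and the simulated panel are placeholders, and the out-of-sample evaluation then mirrors the OLS sketch above.

```python
# LASSO sketch: every stock's lagged return enters the 30-minute window fit,
# and the penalty keeps only a handful of them.
import numpy as np
from sklearn.linear_model import Lasso

def rolling_lasso_forecasts(R, n, window=30, lam=1e-4):
    # lam is a placeholder penalty level; in practice it would be tuned
    forecasts, realized = [], []
    for t in range(window, R.shape[0] - 1):
        y = R[t - window + 1 : t + 1, n]        # target stock's returns in window
        X = R[t - window : t, :]                # all stocks' lagged returns
        model = Lasso(alpha=lam).fit(X, y)      # most coefficients are set to zero
        forecasts.append(model.predict(R[t : t + 1, :])[0])   # f^LASSO_{n,t}
        realized.append(R[t + 1, n])                          # r_{n,t+1}
    return np.array(forecasts), np.array(realized)

R = np.random.default_rng(2).normal(scale=1e-3, size=(390, 50))   # placeholder panel
f, r_next = rolling_lasso_forecasts(R, n=0)
print(f.shape, r_next.shape)
```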

  17. Result: Using the LASSO increases out-of-sample return predictability by a factor of 8.08/5.43 ≈ 1.5! $r_{n,t+1} = \tilde{a}_n + \tilde{b}_n \cdot \left( \frac{f^{\mathrm{OLS}}_{n,t} - \mu^{\mathrm{OLS}}_{n}}{\sigma^{\mathrm{OLS}}_{n}} \right) + \tilde{c}_n \cdot \left( \frac{f^{\mathrm{LASSO}}_{n,t} - \mu^{\mathrm{LASSO}}_{n}}{\sigma^{\mathrm{LASSO}}_{n}} \right) + e_{n,t+1}$ [Out-of-Sample Return Predictability table repeated from slide 14.]

  18. trading-strategy returns

  19. [Out-of-Sample Return Predictability table repeated from slide 14.]

  20. TS Momentum: Ignoring look-ahead bias and trading costs, the LASSO generates monthly excess returns of $(390 \cdot 21) \cdot 3.17 \times 10^{-4} = 2.60\%$, where $\tilde{c}_n$ is the return to a time-series momentum strategy: $\tilde{c}_n = \frac{1}{T} \sum_{t=1}^{T} \left( \frac{f^{\mathrm{LASSO}}_{n,t} - \mu^{\mathrm{LASSO}}_{n}}{\sigma^{\mathrm{LASSO}}_{n}} \right) \cdot r_{n,t+1}$ [Out-of-Sample Return Predictability table repeated from slide 14.]
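
A quick check of the arithmetic behind the 2.60% figure, with placeholder data: the estimated slope is a per-minute average return, so multiplying by 390 trading minutes per day and 21 trading days per month gives the monthly number.

```python
# Link the regression slope to a time-series momentum return and scale it to a
# month. The forecasts and returns below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(390 * 21)                          # standardized forecasts (fake)
r_next = 3.17e-4 * z + 1e-3 * rng.standard_normal(z.size)  # next-minute returns (fake)
c_n = np.mean(z * r_next)                                  # per-minute strategy return

print(f"{390 * 21 * c_n:.2%}")                             # roughly 2.6% for the month
print(f"{390 * 21 * 3.17e-4:.2%}")                         # 2.60%, the quoted figure
```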

  21. Result: LASSO-based strategy generates returns of 0.30% per month net of trading costs. Predictability matters.

      Trading-Strategy Returns
                              No Spread       NBBO
      ⟨r^LASSO_{n,t}⟩           2.82%         0.30%
                              (128.47)       (23.58)

  22. evidence of sparsity

  23. [Figure: Significant Predictors per Stock in October 2009, plotted daily from Oct 5th through Oct 26th.] Result: LASSO typically uses only 11 predictors.
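
Measuring sparsity amounts to counting how many lagged returns receive a nonzero LASSO coefficient in each estimation window; a minimal sketch with simulated data, where the window length, the number of candidates, and the penalty level are placeholders.

```python
# Count nonzero LASSO coefficients in a single estimation window (simulated data).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 100))             # 30-minute window, 100 candidate lags
y = X[:, :3] @ np.full(3, 0.05) + 0.01 * rng.standard_normal(30)

n_used = np.count_nonzero(Lasso(alpha=0.01).fit(X, y).coef_)
print(n_used)                                  # number of lagged returns actually used
```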

  24. more than just news

  25. Result: News announcements don't reveal how information is going to propagate through the market. Uses data from RavenPack.

      Dependent variable:                    #UsedBy_{n,t}                 isUsed_{n→m,t}
                                         (1)        (2)        (3)              (4)
      hasNews_{n,t}                     0.65                                    0.01
                                       (8.84)                                  (0.21)
      hasNews_{n,t} × newsRelevance_{n,t}          0.88
                                                  (10.69)
      hasNews_{n,t} × newsImpact_{n,t}                        2.01
                                                             (13.20)
      Adj. R^2                          93.2%      94.1%      94.5%             14.2%
      Time FE                            Yes        Yes        Yes               Yes
      Group FE                           Yes        Yes        Yes               Yes
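
A sketch of how a regression of this form can be run with time and group fixed effects; the DataFrame and its column names (isUsed, hasNews, newsRelevance, date, group) are hypothetical stand-ins for the RavenPack panel, not the paper's actual variable names.

```python
# Fixed-effects regression sketch with simulated data: time and group effects
# enter as categorical dummies via the statsmodels formula API.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({                              # hypothetical stand-in panel
    "isUsed": rng.integers(0, 2, 600),
    "hasNews": rng.integers(0, 2, 600),
    "newsRelevance": rng.uniform(0, 1, 600),
    "date": np.repeat(np.arange(30), 20),        # time fixed effects
    "group": np.tile(np.arange(20), 30),         # group fixed effects
})

fit = smf.ols("isUsed ~ hasNews:newsRelevance + C(date) + C(group)", data=df).fit()
print(fit.params["hasNews:newsRelevance"], fit.rsquared_adj)
```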

  26. economic implications

  27. [Figure: Significant Predictors in October 2010, plotted daily from Oct 1st through Nov 1st.] Implication: There is structure between factors and noise.

  28. Slogan: LASSO to identify and estimate x. 1. Out-of-sample predictability 2. Trading-strategy returns 3. Evidence of sparsity 4. More than just news 5. Economic implications
