Title: Controlling FWER in Stepwise Regression Using Multiple Comparisons

Abstract: Forward stepwise regression provides an approximation to the sparse feature selection problem and is used when the number of features is too large to search model space manually. In this setting, we desire a rule for stopping stepwise regression with hypothesis tests while controlling a notion of false rejections. However, forward stepwise regression is commonly dismissed as ``data dredging'' and regarded as statistically unsound: because the hypotheses it tests are chosen by looking at the data, the resulting classical hypothesis tests are invalid. We present a simple solution that leverages classical multiple comparison methods to test the stepwise hypotheses, building on the max-t test proposal of Brown and Buja (2014). The resulting procedures are fast enough to be used in high-dimensional settings while controlling the family-wise error rate (FWER). Competing procedures estimate new, computationally expensive p-values and perform selection while controlling the false discovery rate (FDR). Although our error measure is more conservative, we achieve substantially higher power with large gains in speed and simplicity. We provide both step-up and step-down variants of our procedure and characterize the different hypotheses tested in each case. Furthermore, our proofs extend readily to more general correlation learning methods such as Sure Independence Screening.
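
To make the stopping rule concrete, the following minimal Python sketch runs forward stepwise selection with a max-t stopping rule. It is an illustration under assumed simplifications, not the paper's exact procedure: the Šidák-style adjustment 1 - (1 - p)^m over the m remaining candidates stands in for the exact max-|t| null calibration, and the function name forward_stepwise_maxt is hypothetical.

import numpy as np
from scipy import stats

def forward_stepwise_maxt(X, y, alpha=0.05):
    # Forward stepwise selection with a max-t stopping rule (sketch).
    # At each step every remaining candidate is scored by the absolute
    # t-statistic it would earn if added to the current model; the best
    # candidate's raw p-value is then adjusted for the selection over the
    # m remaining candidates.  The Sidak-style bound below is a stand-in
    # for the exact max-|t| calibration and assumes near-independence
    # of the candidate t-statistics.
    n, p = X.shape
    selected = []
    remaining = list(range(p))
    Q = np.ones((n, 1)) / np.sqrt(n)  # orthonormal basis: intercept only

    while remaining:
        df = n - Q.shape[1] - 1       # residual df after adding one column
        if df <= 0:
            break
        # Residualize the response and each candidate on the current model.
        r = y - Q @ (Q.T @ y)
        Xr = X[:, remaining] - Q @ (Q.T @ X[:, remaining])
        norms = np.linalg.norm(Xr, axis=0)
        norms[norms == 0] = np.inf    # guard against exact collinearity
        corr = (Xr.T @ r) / (norms * np.linalg.norm(r))
        tstats = corr * np.sqrt(df / np.clip(1 - corr**2, 1e-12, None))
        j = int(np.argmax(np.abs(tstats)))
        p_raw = 2 * stats.t.sf(abs(tstats[j]), df)
        p_adj = 1 - (1 - p_raw) ** len(remaining)  # Sidak-style adjustment
        if p_adj > alpha:
            break                     # stop: max-t no longer significant
        selected.append(remaining.pop(j))
        Q = np.hstack([Q, (Xr[:, j] / norms[j])[:, None]])  # extend basis
    return selected

A brief usage example on synthetic data with one true signal:

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = 2.0 * X[:, 0] + rng.standard_normal(200)
print(forward_stepwise_maxt(X, y))  # typically selects column 0 and stops

Because each step's cost is dominated by one matrix-vector sweep over the remaining candidates, this style of calibration is cheap enough for high-dimensional use, in contrast to procedures that must estimate new p-values by simulation at every step.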