Recent advances in signal processing for the detection of Steady-State Visual Evoked Potentials (SSVEPs) have moved away from traditional calibrationless methods, such as canonical correlation analysis (CCA), and towards algorithms that require substantial training data. In general, this shift has improved detection rates, but SSVEP-based brain-computer interfaces (BCIs) now suffer from the requirement of costly calibration sessions. Here, we address this issue by applying transfer learning techniques to SSVEP detection. Our novel Adaptive-C3A method incorporates an unsupervised adaptation algorithm that requires no calibration data. Our approach learns SSVEP templates for the target user and provides robust class separation in feature space, leading to increased classification accuracy. Our method achieves significant improvements in performance over a standard CCA method as well as a transfer variant of the state-of-the-art Combined-CCA method for calibrationless SSVEP detection.