Content-based recommendation is a popular framework for video recommendation, in which videos are recommended according to their content similarity. Aiming to provide videos semantically similar to those a user has already viewed, most existing methods measure video similarity using tags or other semantics-oriented features. However, effective recommendations can also be based on affective content, which may correlate more strongly with users' tastes and moods. We propose to combine semantic and affective information, extracted from a video's tags and audio-visual features, respectively. Since no single type of feature is sufficient to capture the full spectrum of users' tastes, our approach mines users' viewing logs and applies a boosting strategy to learn a strong similarity-fusion function. We conduct experiments to evaluate the proposed method, and the results show that it improves the performance of content-based recommendation.
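To make the fusion idea concrete, the following is a minimal sketch, not the paper's actual algorithm: tag-based (semantic) and audio-visual (affective) similarities are treated as weak classifiers over video pairs labelled similar or dissimilar from user logs, and an AdaBoost-style loop learns the weights of their combination. All names (`semantic_sim`, `affective_sim`, `boost_fusion`, the `tags`/`affect` fields) and the specific base measures (Jaccard over tags, cosine over affect features) are illustrative assumptions.

```python
import math

def semantic_sim(a, b):
    # Jaccard similarity over tag sets (hypothetical stand-in for a
    # tag-based semantic similarity measure).
    ta, tb = set(a["tags"]), set(b["tags"])
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def affective_sim(a, b):
    # Cosine similarity over audio-visual affect features (hypothetical).
    fa, fb = a["affect"], b["affect"]
    dot = sum(x * y for x, y in zip(fa, fb))
    na = math.sqrt(sum(x * x for x in fa))
    nb = math.sqrt(sum(x * x for x in fb))
    return dot / (na * nb) if na and nb else 0.0

def boost_fusion(pairs, labels, sims, rounds=10, thresh=0.5):
    # AdaBoost-style learning of fusion weights. Each base similarity,
    # thresholded at `thresh`, acts as a weak classifier over video pairs
    # labelled similar (+1) or dissimilar (-1); the returned normalized
    # weights define the fused similarity function.
    n = len(pairs)
    w = [1.0 / n] * n
    alphas = {}
    for _ in range(rounds):
        best = None
        for name, sim in sims.items():
            preds = [1 if sim(a, b) >= thresh else -1 for a, b in pairs]
            err = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
            if best is None or err < best[1]:
                best = (name, err, preds)
        name, err, preds = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        alphas[name] = alphas.get(name, 0.0) + alpha
        # Re-weight pairs so misclassified ones gain influence.
        w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, preds, labels)]
        s = sum(w)
        w = [wi / s for wi in w]
    total = sum(alphas.values())
    return {k: v / total for k, v in alphas.items()}

def fused_sim(a, b, weights, sims):
    # Fused similarity: convex combination of the base similarities.
    return sum(wt * sims[k](a, b) for k, wt in weights.items())
```

In this sketch, recommendation then amounts to ranking candidate videos by `fused_sim` against the user's viewed videos; the boosting loop decides how much the semantic and affective cues each contribute.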